In Orwell’s dystopian classic 1984, the omnipresent state, Big Brother, has the ability to look into the homes of any person at any time. Telescreens are mandatory in all houses, and the state can call citizens to answer for their actions, their conversations and even, it seems, their thoughts at any time.
Now that the New Zealand government has launched, more or less in secret, an internet filter designed to block objectionable content, some are questioning whether Orwell’s vision has come to New Zealand. Such a radical step – the filtering of a formerly free and unregulated internet – has shocked many commentators, who are troubled by both the implications of such a bold step and the clandestine nature of its execution.
However, supporters of the filter argue that, given the severity of the problem – the production and trafficking of images of child abuse – such a step can be justified as a necessary, practical method of addressing, at least in part, a ghastly and very real social problem.
So where did the filter come from?
“It began two or three years ago now, when we became aware of the possibility of a useful filtering system being available,” says Keith Manch, Deputy Secretary, Regulation and Compliance, Department of Internal Affairs.
“While [the Censorship Compliance Unit] deals with the general classification system and the labels you see on things, and magazines, and books, and videos and DVDs, the majority of [the child abuse material problem] deals with the trading, making and possession of objectionable images on the internet, and in chat rooms and the like. And we are aware that there is quite a lot of activity relating to websites that carry these images. So from a general sort of compliance/performance model perspective, if you can prevent some of the offending, then that’s a good thing to do.”
So what does the filter do?
The filter is designed to block access to objectionable material (defined as “websites that contain publications that promote or support, or tend to promote or support, the exploitation of children, or young persons, or both, for sexual purposes” in the filter’s code of practice). It affects only those using ISPs that have joined the filter scheme.
Working from a list of websites known to contain images of child sexual abuse, the filter, via a piece of software called a ‘NetClean Whitebox’, inspects every URL request made by the customers of ISPs that have joined the project. If a request matches one on the list – that is, if a URL that is known to contain child sexual abuse images is requested – access to that site is denied, and the user is redirected to a page warning the user that they have attempted to access objectionable material [see Fig 1]. There is a form on the screen by which users can appeal the blocking.
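The mechanism described above – every URL request checked against a list, with matches redirected to a warning page – can be sketched in a few lines. This is an illustration only: the blacklist entries, warning-page address and function names below are invented for the example, not drawn from the DIA’s actual (and secret) system.

```python
# Hypothetical sketch of blacklist-based URL filtering.
# Entries and addresses are invented; the real list is never published.

BLACKLIST = {"example-blocked.invalid/abuse"}  # hypothetical blocked URL
WARNING_PAGE = "http://filter.example.invalid/blocked"  # hypothetical redirect target


def normalise(url: str) -> str:
    """Strip the scheme so matching ignores http/https differences."""
    for prefix in ("http://", "https://"):
        if url.startswith(prefix):
            return url[len(prefix):]
    return url


def check_request(url: str) -> str:
    """Return the URL to serve: the original if allowed, else the warning page."""
    if normalise(url) in BLACKLIST:
        # Match found: the user is redirected to the warning page.
        # No record of who made the request is kept.
        return WARNING_PAGE
    return url


# An ordinary request passes through; a blacklisted one is redirected.
print(check_request("http://example.com/news"))
print(check_request("https://example-blocked.invalid/abuse"))
```

In the real deployment this check happens inside the ISP’s network via the NetClean Whitebox, rather than in the user’s software, but the principle – an exact-match lookup against a curated list, followed by a redirect – is the same.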
Additions to the list of objectionable websites are cross-checked and signed off by multiple officials (including a representative from the Censor’s Office), and a review of all additions is carried out by an independent review group.
So far only two small ISPs – WatchDog and MaxNet – have announced that they have adopted the filter; together they account for less than one percent of New Zealand’s internet traffic. Vodafone/Ihug, TelstraClear and Xtreme Networks have announced, however, that they intend to adopt the filter at some stage.
What does it not do?
The filter is not, admits the DIA, a silver bullet to eliminate child abuse material from the internet or to stop its production. It is, rather, a small part of the DIA’s ongoing prevention activities.
Officials have said that the filter will not be used for enforcement. Records of who has attempted to access what are not kept (because the request is blocked, no objectionable material is actually accessed, and therefore no crime has been committed).
The filter is not intended to act as a ‘safety device’ for web surfers, and parents are urged to maintain the safety precautions they currently use for their children online.
So what’s the problem?
Most of us, when confronted with descriptions of child abuse, are outraged. Child abuse is terrible; therefore surely the filter, especially one as seemingly benign and unobtrusive as this, must be a good idea, right? Not necessarily, say the critics.
Both advocates and critics of the filter project agree that it will address only a tiny fraction of the problem. It is believed that only a very small percentage of this sort of material is accessed via websites anyway (traffickers preferring methods such as peer-to-peer networks, FTP sites and other, less public, channels). Given this limited effectiveness, some critics therefore find the filter plan too radical, especially considering it has the potential to create numerous problems of its own.
One of these problems is the issue of secrecy. It has shocked many commentators that something as drastic as an internet filter has been launched, more or less in secret, by the New Zealand government.
Some government-issued literature makes comparisons between the work of the Chief Censor and the effect that the filter will have. There is a distinction that needs to be made here, however: the Censor is publicly accountable. The Censor is obliged, by law, to publish a list of everything refused classification in this country. His work is open and transparent. The work of the filter is not. It operates by means of a secret blacklist which will never be disclosed to the public in any fashion other than via a leak.
The idea that, over time, the scope of the filter will increase to include other types of material is a very real concern with a scheme like this. No sooner had plans been announced for a compulsory internet filter in Australia than it became an easy target for political power plays (see news item on page 12). Politicians proposed that the filter’s scope be widened to include other types of controversial material, including anorexia and euthanasia sites. Given that the list is to be kept secret, the potential for this sort of abuse is a very real issue to civil liberties advocates.
And guarantees offered that scope creep will not occur are problematic. While it’s heartening to see that the government is taking this concern seriously, there simply is no way to ensure that at some point in the future, under some circumstance, someone won’t try to appropriate the filter as a means to some other end. Attitudes change, governments change, and the filter represents a very attractive and powerful way to control information. There are checks and balances in place, and they should be applauded, but there is no way to guarantee against the unforeseeable. What will this filter be used for 10, 20 or 30 years from now? As events across the Tasman have shown us, once it is technically possible to filter, there will be appeals made to increase the scope.
Further compounding the issue is that no one is really sure just how secure the system is.
Although the government has conducted a two-year test of the filter, and vouches for its security, critics such as Tech Liberty (techliberty.org.nz) say that the filter “poses a risk to the security and stability of the New Zealand internet”.
Jamie Cairns, president of the Internet Service Providers Association of New Zealand, says that while he feels the system is secure, it’s not without its flaws.
“Security in the true sense of the word, I personally don’t think that there’s an issue with it... but I think that, reliability-wise, yes, there could be an issue... It will probably work just fine, but the potential is that if [the filter’s administrators] screw that up, then they could potentially knock out other things.”
NetGuide’s Deputy Editor, Duncan Campbell, is a member of the independent reference group. He didn’t hesitate when invited to take part, believing that the more pairs of eyes watching what the filter is doing, the better.
“I’m under no illusions about how effective this system is likely to be,” he says. “It’s no substitute for the sort of enforcement work that the Censorship Compliance Unit is doing – they’re the ones who catch those who view and share this sort of material. But if it helps to deter the casual viewer, who gets a web address from a friend and wants to look out of curiosity, then I’m all for it. That sort of casual interest can lead to obsession.
“I believe that other members of the group share my concern that one day another Minister might try to broaden the scope of the filter beyond child abuse images. But it should be remembered that ISPs have the right to opt out of the filter if that happens. I’ve made it plain that I’ll resign too – and state publicly my reasons for doing so.”
The subject of child abuse is certainly an emotionally charged issue. But to be concerned with the ramifications of an internet filter is in no way to condone the atrocious material that the filter is designed to block. Both critics and advocates of the filter consider child abuse material intolerable and its traffickers despicable.
And there is no doubt that the people behind this project are operating with the best of intentions. But the fact remains that the government using a filter to remove material that it doesn’t like from the internet is a very, very radical step indeed, and one that is riddled with the potential for abuse. Nevertheless, for better or worse, the filter is going ahead.
And it seems that many of us are willing to make this concession – to filter out some of the more terrible aspects of humanity that this ‘free and open’ forum has enabled. Fair enough. But in our age, just as in Orwell’s, “there is no such thing as ‘keeping out of politics’”. If we remain indifferent to the ramifications of what an internet filter means, and what it is capable of doing, it is at our own peril.