Stay networked. Get informed. Broadcast your projects.
Serendipity – happy accident – seems to be governing my life at the moment. Last week I was asking for opinions about how to deal with “Facebook” at work. The responses I received generally agreed with the majority CARIBnog position: to block the site(s). I still have questions, the first being: how does the network manager decide which sites to block? What constitutes “improper use of bandwidth (and time)”?
This morning in my mail was the latest post from John Carr’s Wordpress blog Desiderata. His subject is the new European Union (EU) Directive on combating the sexual abuse and sexual exploitation of children and child pornography, which has just one more step to take to become European law. I have copied an excerpt below, but recommend that you consider following John Carr’s blog, which, while focused on the protection of children, frequently comments on other areas of Internet governance in an informed and reliable manner.
For me, the sad thing about deletion and blocking is that it is invariably too late; the existence of the website to delete or block is incontrovertible evidence that children have been hurt. And of course this is recognised by the child protection activists who are also working tirelessly on building capacity in children to recognise danger and to defend themselves and on identifying perpetrators to prevent further crimes – there was a very interesting presentation on this work during IGF 6 in Nairobi.
“Under the first part of the Article it is now mandatory for every Member State to initiate measures which will ensure
... the prompt removal of webpages containing or disseminating child pornography hosted in their territory and to endeavour to obtain the removal of such pages hosted outside of their territory.
The second part of Article 21 addressed blocking. This was the bit which undeservedly hogged the limelight.
Blocking entails creating and distributing to relevant companies a list of web addresses known to contain child abuse images. As soon as the images have been deleted on the remote server the address comes off the list. Until that point the operation of the list renders the address and hence the illegal images inaccessible to the great majority of internet users on that service who do not have the technical knowledge to find a workaround or an alternative source.
Blocking was nobody's preferred option. Deletion at source is the best answer. Always. The argument over blocking was simply about what view you took where the material stayed on the remote server for longer than was reasonable.
As it becomes harder to propagate child abuse images using the web other technical solutions for other online environments will be of increasing importance but the need to have tools to deal with websites containing child abuse images is unlikely ever to go away.”
So how do these ideas connect? At root, the problem in each case is human behaviour. My argument is that blocking does not change human behaviour. It is a “quick and easy” technical fix which solves the technology problem – it frees up bandwidth – but does not address the human problem. Technology functions by the norms of technology, which is perfectly acceptable. The problems come when people begin to function by the norms of technology, without seeing any necessity to learn the “human” norms.
As an aside to this, here is my “week 2” problem, about which I would be grateful for some other opinions:
I am aware of a case where a consultant was contracted to assist in writing policy for an educational institution. The consultant’s report consisted of the advice that there was no need to re-invent the wheel, together with approximately 400 pages of academic policies downloaded from the Internet. This strikes me as highly unsatisfactory: it seems to me important that policy should be contextualised to the local situation it is supposed to address. Any comments?