Harmful Digital Communications Bill: panacea or problem?

Dr Ursula Cheer, Professor of Law, University of Canterbury, Christchurch, comments on the Harmful Digital Communications Bill.



The Government is in the process of solving cyber-bullying. Minister Amy Adams resolutely believes in the Harmful Digital Communications Bill. The Bill provides a civil regime based on a set of communication principles and an approved Agency with the power to receive and assess complaints about harm caused by digital communications. The Agency would focus on mediating complaints and seeking voluntary take-down where appropriate, but would have no power to make orders. If a complaint cannot be resolved at this level, the complainant may take the matter to the District Court. The Bill also contains a criminal offence of posting a harmful digital communication with intent to cause significant emotional distress, punishable by up to two years' imprisonment or a fine.

Digital Communication Principles

The regime is based on a set of Digital Communication Principles, which are:

Principle 1

A digital communication should not disclose sensitive personal facts about another individual.

Principle 2

A digital communication should not be threatening, intimidating, or menacing.

Principle 3

A digital communication should not be grossly offensive to a reasonable person in the position of the affected individual.

Principle 4

A digital communication should not be indecent or obscene.

Principle 5

A digital communication should not be part of a pattern of conduct that constitutes harassment.

Principle 6

A digital communication should not make a false allegation.

Principle 7

A digital communication should not contain a matter that is published in breach of confidence.

Principle 8

A digital communication should not incite or encourage anyone to send a message to an individual with the purpose of causing harm to the individual.

Principle 9

A digital communication should not incite or encourage another individual to commit suicide.

Principle 10

A digital communication should not denigrate an individual by reason of his or her colour, race, ethnic or national origins, religion, gender, sexual orientation, or disability. 

Both the approved Agency and the courts must take account of the communication principles and act consistently with the rights and freedoms contained in the New Zealand Bill of Rights Act 1990 when operating under the Act: the courts because the Bill requires it, and the Agency because our developing Bill of Rights jurisprudence requires that those administering legislation which impacts on speech take account of that Act.

The principles appear to cover most of the forms of harmful speech recognised in New Zealand law. It is possible, then, that they would regulate more speech online and have more serious effects than the regulation of offline speech. However, the Bill contains a number of safeguards against this. First, only an individual who has suffered harm can make a complaint; harm is defined as serious emotional distress,[1] so the threshold is high. Second, a court can only consider a complaint if the Agency has already attempted to deal with it, and then only if there has been a serious, repeated or threatened breach of one or more of the Principles which has caused or is likely to have caused harm.[2]

Remedial powers in the Bill

The remedial powers in the Bill are practical and extensive. The Court may make interim orders pending determination of an application for a final order.[3] Both interim and final orders can take the following forms:

  • an order to take down or disable material:
  • an order that the defendant cease the conduct concerned:
  • an order that the defendant not encourage any other persons to engage in similar communications towards the affected individual:
  • an order that a correction be published:
  • an order that a right of reply be given to the affected individual:
  • an order that an apology be published.[4]

The following orders can be made against content hosts:

  • an order to take down or disable public access to material:
  • an order that the identity of the author of an anonymous or pseudonymous communication be released to the court:
  • an order that a correction be published:
  • an order that a right of reply be given to the affected individual.[5]

There is also power to direct orders to apply to third parties, to make a declaration that a Principle has been breached, and to order suppression of names.[6] Such declarations would not bind hosts of offshore websites, but are intended to have symbolic effect.

A court is required to consider a number of factors before making an order, including the purpose of the communicator and whether the communication was intended to cause harm. This does not mean a lack of fault will result in no order being made: in some cases, even where there is no fault at all, it may still be sensible to order removal of material from the internet, or publication of a correction in an appropriate form. The inclusion of this factor does mean, however, that the issue of fault must be addressed and weighed in some way before any order is made. The Bill also requires the court to address whether the communication is in the public interest, as well as the truth or falsity of the statement. Additionally, a court must consider the occasion, context and subject matter of the communication; the content of the communication and the level of harm caused; the extent to which the communication has spread beyond the original parties; the age and vulnerability of the affected individual; the conduct of the defendant, including any attempt to minimise the harm; the conduct of the affected individual or complainant; the technical and operational practicalities, and the costs, of an order; and the appropriate individual or other person who should be subject to the order.[7]

All of this is well-intentioned, and one can hardly argue against the Government's intent in pushing on with the legislation. Nevertheless, I believe the Bill is a risk, and a classic example of the difficulties of legislating for political reasons to deal with prominent single issues. I support its broad aim, but it is an unknown, new and quite complex legal regime whose impact on online speech may be greater than, or simply different from, what would apply if the same speech were made in another medium. My main concerns are set out below. A number of them highlight risks associated with implementing the Bill.

My main concerns

  • Who is going to be the ‘approved agency’?

    Netsafe has been suggested as an appropriate body. However, Netsafe is an interest group geared up to advance the cause of victims. How is Netsafe going to approach complaints in a balanced and nuanced way, especially when freedom of expression issues arise? Whatever agency is appointed, it is crucial that it is properly resourced to deal with valid complaints, weed out the frivolous ones, and simply cope with what are likely to be numerous complaints, given the extent of online communication now. It would be highly undesirable to create an attractive complaints regime like the Office of the Privacy Commissioner, and then set it up to develop a backlog of cases, as has happened there.

  • How accessible will the low-level complaints process via the Agency actually be?

    How much real interest is there going to be in mediation?

    I think the main focus will be on take-down. The problem is that mediation takes time, but complainants will be seeking take-down as soon as possible. The Agency cannot make orders, and the system also makes no real provision for threatened publication to be prevented by order, because before a court can make an interim order, the Agency must have tried to mediate or otherwise negotiate the complaint. If the Agency cannot prevent publication through its processes, an order may come too late. So how useful will it be to go to the Agency rather than directly to the ISP to ask that the material be taken down?

  • The Bill provides safe harbour protection for ISPs

    The Bill provides safe harbour protection for ISPs which, once notified of offending material, use the special processes in the Bill to notify the poster of the material as soon as possible, and within 48 hours of the complaint. The poster then has 48 hours to respond. If the poster cannot be identified, the material can be taken down in the meantime. Following the process immunises the ISP against any possible legal claim by the complainant. Take-down regimes are now used world-wide, but their effectiveness is not yet proven. NZ ISPs, especially if they are small businesses, will be inclined to take material down as soon as possible to avoid trouble. That in itself is a chilling effect on freedom of expression. Additionally, if they do prefer to take material down immediately and do not follow the 48-hour-plus-48-hour requirement in the Bill to contact the poster first, they cannot claim the safe harbour protection in the Bill.

  • Height of the liability threshold

    The criminal offence does set the liability threshold high, as it should be, but there is a risk that children may be unnecessarily criminalised, as will adults who have simply been stupid and thoughtless. Whether or not prosecution follows is a matter of police discretion, and we have no way of monitoring how that discretion will be used (bad cases will be very obvious, but by then it is too late for the person who has been prosecuted). Police will have to be very well-trained and sensitive.

  • Some principles are problematic

    Many of the communication principles may encourage frivolous or vexatious complaints, although the Bill contains provisions allowing the Agency or a District Court to weed these out, and the publication must have caused harm which is sufficiently serious. However, some principles are problematic because they require very difficult judgments to be made before the Agency can decide whether a principle has even been breached. For example:

  • Principle 3

A digital communication should not be grossly offensive to a reasonable person in the position of the affected individual.

Suppose a Muslim complains about cartoons of the Prophet. Does this mean the Agency has to put itself in the shoes of a reasonable Muslim, or of a reasonable but culturally aware non-Muslim?

  • Principle 4

A digital communication should not be indecent or obscene.

One person’s obscenity is another’s work of art.

  • Principle 7

A digital communication should not contain a matter that is published in breach of confidence.

Sometimes it is a good idea for confidential information to be published – the law of breach of confidence protects this with a specific public interest defence. There is no such defence in the Bill, although freedom of expression must be taken into account, along with other matters, when breach is considered by the Agency.

  • Principle 10

A digital communication should not denigrate an individual by reason of his or her colour, race, ethnic or national origins, religion, gender, sexual orientation, or disability.

What is denigration?

All of these principles are already covered by other areas of the law, where accepted approaches, processes and defences have been developed over time. Are these to be applied to the principles too, or can the Agency and the District Court simply strike off in a new direction? The Law Commission, in the advice to Cabinet on which this Bill is based, certainly suggested that at the Agency level a highly legalistic approach would not be expected. But these principles require legal and consistent interpretation. It seems unlikely the Agency can do its job properly and fairly without some legal expertise. That feeds back into the point made above about good resourcing.

One-issue law reform is always risky.  The operation of the Bill will need careful monitoring.

Footnotes

[1] Harmful Digital Communications Bill (168-2) (2014), cl 4.

[2] Harmful Digital Communications Bill (168-2) (2014), cls 11(1) and (2).

[3] Harmful Digital Communications Bill (168-2) (2014), cl 16.

[4] Harmful Digital Communications Bill (168-2) (2014), cl 17(1).

[5] Harmful Digital Communications Bill (168-2) (2014), cl 17(2).

[6] Harmful Digital Communications Bill (168-2) (2014), cl 17(4).

[7] Harmful Digital Communications Bill (168-2) (2014), cl 17.

