This brain package always includes two naive and demonstrably false beliefs. One is that safe backdoors exist, so that all the good guys can come and go as they please without any of the bad guys being able to do the same. The other is that everyone will be nice to each other if we know their names.

This big bad box of baloney blipped up again this week as part of the government's consultation for the Online Safety (Basic Online Safety Expectations) Determination 2021 (BOSE) – the more detailed rules for how the somewhat rushed new Online Safety Act 2021 will work.

Section 8 of the draft BOSE [PDF] is based on that first belief.

"If the service uses encryption, the provider of the service will take reasonable steps to develop and implement processes to detect and address material or activity on the service that is or may be unlawful or harmful," it says.

It should go without saying that if the service provider can see whether something might be unlawful, then it isn't actually encrypted end to end, but the government seems to have trouble understanding this point.
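To make the contradiction concrete, here's a minimal sketch of what genuine end-to-end encryption looks like, in Python using the PyNaCl library. The participants and the message are illustrative, but the shape of the problem isn't: the service in the middle holds no keys, so the only thing it could ever "detect and address" is opaque bytes.

from nacl.public import PrivateKey, Box

# Alice and Bob each generate a keypair. The service never sees these.
alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts for Bob using her private key and Bob's public key.
sending_box = Box(alice_key, bob_key.public_key)
ciphertext = sending_box.encrypt(b"meet me at the usual place")

# This is all the service provider ever handles: opaque bytes.
# There is nothing here to scan for "unlawful or harmful" material.
print(ciphertext.hex())

# Only Bob, holding his private key, can read the message.
receiving_box = Box(bob_key, alice_key.public_key)
print(receiving_box.decrypt(ciphertext))  # b'meet me at the usual place'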
Wishing harder won’t bring you that magical decryption pony
The simple fact is that if good guys can decrypt the data when they’re given some sort of authority, then so can the bad guys that use some sort of forged authority. And they will.
Anyone who’s studied the theoretical innards of computing science knows why: a decryption key works the same for whoever holds it, and no mathematics can tell a legitimate authority from a forged one. It just can’t be done.
It’s the mathematics, stupid.
For those who don’t understand that maths is real, reality can also be understood through thoughtful observation.
If there were a way to determine who is and isn’t legitimately allowed to decrypt a message, or to be given any kind of access to private data, then we’d already be using it, and hacking wouldn’t exist. You may have noticed that hacking still exists.
Simply wishing harder won’t get you that particular pony for Christmas.
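For the doubters, here's a deliberately simple Python sketch of the problem, using the widely available cryptography library. The "warrant check" and the key names are hypothetical, but the point is general: the authority check is ordinary code sitting beside the mathematics, while the escrowed key decrypts for whoever holds it.

from cryptography.fernet import Fernet

escrow_key = Fernet.generate_key()              # the master key, held "safely"
ciphertext = Fernet(escrow_key).encrypt(b"a private message")

def lawful_access_decrypt(token: bytes, key: bytes, warrant: str) -> bytes:
    """Decrypt only when a 'valid' warrant is presented: policy, not maths."""
    if warrant != "signed by a judge":
        raise PermissionError("no valid warrant")
    return Fernet(key).decrypt(token)

# The good guys, with authority:
print(lawful_access_decrypt(ciphertext, escrow_key, "signed by a judge"))

# The bad guys, holding a stolen or leaked copy of the key, skip the
# ceremony entirely. The mathematics cannot tell the two apart:
print(Fernet(escrow_key).decrypt(ciphertext))   # same plaintext

Nothing in the maths enforces the warrant check; it can be forged, bypassed, or patched out, which is exactly the forged-authority problem above.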
Section 9 of the draft BOSE is based on the second belief, anonymity.
“If the service permits the use of anonymous accounts, the provider of the service will take reasonable steps to prevent those accounts being used to deal with material, or for activity, that is or may be unlawful or harmful,” it says.
Those “reasonable steps” could include “processes that prevent the same person from repeatedly using anonymous accounts to post material, or to engage in activity, that is unlawful or harmful,” or “having processes that require verification of identity or ownership of accounts”.
More than two decades of experience has shown that having people’s names doesn’t stop the abuse.
Just one recent example is the online racist abuse of English football players via Twitter, where 99% of accounts suspended for sending racist abuse were not anonymous.
Indeed, having people’s identities or other personal information available is itself a risk. It takes but moments to find many, many examples of police misusing official databases for personal purposes.
Even if we could limit access to legitimate authorities – which we can’t – we can never know if their reason for access is legitimate.
Why is the online world becoming more restricted than offline?
According to the government’s consultation paper [PDF]: “A key principle underlying the Act is that the rules and protections we enjoy offline should also apply online”. But that’s simply not the case.

As digital rights advocate Justin Warren explained in a Twitter thread, the Online Safety Act actually requires a much greater level of safety than exists in the offline world.

“The doors in my house aren’t safe because I can jam my fingers in them. Same with all the cupboards. So could any 12-year-old,” he wrote.

Section 12 of the draft BOSE discusses the protection of children from harm. It proposes “reasonable steps” such as age verification systems, something the UK abandoned as impractical, and “conducting child safety risk assessments”.

“I note that we don’t make newspapers or broadcast television conduct child safety risk assessments before letting overpaid columnists talk at length about ‘cultural Marxism’,” Warren wrote.

“We also let [ABC TV program] Play School teach kids how to make a drum from household items while their parents are trying to work at home during lockdown and I want to see that child safety risk assessment.”

Conversely, the government doesn’t make Westfield monitor the conversations of people in the shopping mall food court in case they’re planning a bank robbery, yet that’s precisely what it now expects online platforms to do.

It even expects them to figure out what is and isn’t harmful, both now and into the future. “Service providers are best placed to identify these emerging forms of harmful end-user conduct or material,” says the discussion paper.

Warren is unimpressed, and your correspondent agrees.

“This is the government explicitly abdicating its responsibility to consult with the public on what community standards are and wrestle with the difficult question of what ‘harmful end-user conduct or material’ actually is,” he wrote.

“Instead of doing its job, the government wants Facebook and Google and other private companies to define what constitutes acceptable content. And tries to claim this is treating online the same as offline.”

To see how well this might work in practice, one need only look at how YouTube recently blocked a video of a drinking bird toy for being 18+ content. You may click through safely, though, because it’s not.
‘What about my rights?’
While the discussion paper wants us to “enjoy” rules online – an interesting concept – it isn’t so hot on letting us enjoy our right to privacy and our right to freedom of speech and other communication.

The only mention of rights in the consultation paper is when the government “reserves the right not to publish a submission”. The only mention of privacy is to tell submitters that their personal information will be handled in accordance with the Privacy Act 1988. The only mention of freedom is to say that submissions might be released under the Freedom of Information Act 1982.

It’s the government’s job to protect our rights and freedoms, but in the online world it just can’t be bothered.

By delegating these matters to the online platforms, with penalties if they fail to block ill-defined “harmful” conduct or material, it guarantees the platforms will do what is safest for themselves and err on the side of over-blocking. They will also err towards blocking material which causes them a publicity problem, such as public complaints from small but noisy communities. Restrictions in more authoritarian countries will continue to be propagated globally.

“Online services [will] pre-emptively take down LGBT content when gronks brigade the reporting mechanism. An obvious outcome that has already happened in lots of places but that AusGov will ignore. Again,” Warren wrote.

Of course this is only a consultation paper. The government has called for public submissions, and we have until October 15 to change its mind. Nine whole weeks.

But given how the government has persisted with its demonstrably false beliefs no matter how many times the experts tell it otherwise, will that happen?
Related Coverage
eSafety says tweeting commissioner will not qualify as a formal Online Safety Act request
US Bill introduced to curb ‘big tech bullying’ in the app store space
Canberra asks big tech to introduce detection capabilities in encrypted communication
New ‘safety by design’ toolkit to help the global tech industry care a little bit more