How Twitter hired tech’s biggest critics to build ethical AI
A truly laudable exercise and process by Twitter’s Dorsey. Not only has he chosen to listen to his biggest critics, he is weaving them directly into decision-making around algorithmic development. Two points from this article stand out:
- “Beyond user choice and public transparency, Chowdhury’s goal is to create a system of rules and assessments that function like government over the models: a system that could prevent harms from occurring, rather than just address the causes after people are hurt.” – in other words, a governance framework based on anticipating and preventing unintended consequences, something we consistently advocate for at CFU
- “Their work is democratic, not authoritarian. ‘There’s a life cycle to enacting change,’ Williams explained. ‘You have to focus on enhancement; your first iteration or two is more on monitoring than it is on auditing. This as a concept is so new that focusing very directly on discipline and enforcement, you can’t really drive change through fear.’” – A fascinating approach to change theory: inviting change rather than mandating it, and a model for nascent Digital Age ESG
Twitter in India Faces Criminal Charges for Kashmir Map ‘Treason’
Social media platforms are increasingly being pressurised to accommodate governments’ messaging priorities. Not only is this being applied to what can be posted – and taken down – on their platforms, it is now extending to walking a legal fine line that carries personal liability repercussions for their employees.
For those platforms that do develop ethical standards, standing up to interventionist governments is proving expensive and dangerous. One can see why it might be easier to go with the flow, acceding to legal requests without the headache of pesky ethics standards. The increasingly prevailing counterargument, however, is that the social license to operate will require such standards, given the now well-documented harms that result from doing nothing.
Facebook’s Value Tops $1tn After Judge Dismisses US Lawsuits
While this decision looks favourable to Facebook on the face of it (and in the short term), in reality it only builds pressure for Congress to enact tougher laws that would make such cases easier to adjudicate.
One can imagine that the resubmission of the complaint, with well-researched arguments, will play to Khan’s strengths, since she will be responsible for rewriting it – albeit on a very tight timeframe. “Meanwhile, the House judiciary antitrust subcommittee advanced six bills last week that would overhaul antitrust laws.”
On a broader note, any new laws passed will have to take into account the transitional nature of the time we’re in: definitions that hold today might not be relevant tomorrow, just as the existing ones are now out of date. How do we create a legislative and regulatory process that is agile and mirrors the pace of a fast-moving Digital Age?
NATO Launches €1bn Fund to Invest in War Startups
One hopes that ethical standards will be taken into account for these investments, to prevent a race to the bottom with China, especially considering that:
- “The disruptive technologies that NATO is interested in include artificial intelligence, data and computing, autonomy, quantum-enabled technologies, bio-engineering, human enhancements, hypersonic technology and space.” All of which have the potential to permanently alter and impact societies
- There is sufficient time to take ethical considerations into account in view of the fact that: “The fund and accelerator — called the Defence Innovation Accelerator for the North Atlantic (or DIANA) — are not likely to be operational until 2023, given the heavy bureaucracy involved in coordinating among the 30 NATO members”
- There are conflicting results from governments’ use of such technologies to date, as evidenced by coverage of Federal use of facial recognition: “However, 13 agencies were not aware of what non-federal systems were being used by employees, meaning that the agencies had not fully assessed the risks related to such systems, including privacy and accuracy.”
App Taps Unwitting Users Abroad to Gather Open-Source Intelligence
This investigation underlines the dispersed nature of the decentralised surveillance we have been highlighting in prior newsletters.
In this instance, “About half of the company’s clients are private businesses seeking commercial information, Premise says. That can involve assignments like gathering market information on the footprint of competitors, scouting locations and other basic, public observational tasks. Premise in recent years has also started working with the U.S. military and foreign governments, marketing the capability of its flexible, global, gig-based workforce to do basic reconnaissance and gauge public opinion […] As of 2019, the company’s marketing materials said it has 600,000 contributors operating in 43 countries, including global hot spots such as Iraq, Afghanistan, Syria and Yemen.”
This example also underlines our concern – previously raised regarding space-focussed tech firms – about the growing blurring of lines within private enterprises between commercial and government contracts, with ramifications for their shareholders: “Premise is one of a growing number of companies that straddle the divide between consumer services and government surveillance and rely on the proliferation of mobile phones as a way to turn billions of devices into sensors that gather open-source information useful to government security services around the world.”