Crisp Insights into Digital Age ESG Developments – July 8, 2021

Didi’s Removal From China’s App Stores Marks a Growing Crackdown

 

This is significant news for at least three reasons:

 

  1. It conclusively confirms the breakdown of global trust and the accompanying balkanisation of business activities related to algorithms and data – in other words, most growth businesses these days. There has been considerable build-up to this, as we’ve highlighted over the past 24 months, and this decision proves a key milestone.
  2. China is flexing its muscle and implicitly teaching overseas investors in Chinese companies an important lesson: do not value firms doing business in China the same way as companies operating under less interventionist governments. We wonder where the risk premium will get pegged.
  3. The timing seems to have been deliberate, meant to inflict maximum damage on investors and the listing. Considering China’s desire to foster domestic capital markets and strengthen the yuan as a reserve currency, the timing could be seen as a message to exchanges that encourage Chinese listings, to future hopefuls, and to international investors: if you want a slice of the Chinese success story, you need to come to China and play by its rules…no shortcuts, no proxies.

Read More

     

Opioid Addiction Apps Access Personal Data, Study Finds

 

Studies such as this one don’t exactly burnish tech companies’ credentials for robust self-governance. Instead, they highlight unethical practices and the dangers of a lack of oversight by their boards, let alone regulators.

 

The information gathered by these apps goes well beyond the data collection approved for in-person appointments and demonstrates how laws and regulations have yet to catch up with technology’s ubiquitous presence in consumers’ lives: “If I go meet a counsellor and talk about my treatment needs, I’m not worried about sharing my location data for the last 48 hours,” she said. “That’s a big distinction highlighted in the report. These apps exist outside the regulatory framework and are collecting such a different scope of information.”

 

Read More

ExxonMobil Lobbyists Filmed Saying Oil Giant’s Support for Carbon Tax a PR Ploy

 

This exposé underlines the dangers of corporate lobbying activities, especially those at odds with public statements and values. It also underscores our premise, in Governance REbooted, that radical transparency is here to stay in the Digital Age.

 

For tech firms, should they be similarly involved, the impact on brand and intangible value would be even more damaging. Investors and boards ought to be aware of this and review their companies’ lobbying activities with considerable caution.

 

Read More

This Manual for a Popular Facial Recognition Tool Shows Just How Much the Software Tracks

 

The mission creep in facial recognition is a growing concern. Be it directed at children in schools, employees in the workplace or consumers in stores, the volume of data gathering, triangulation and monitoring is creating a slippery slope on the privacy front.

 

So far, regulation is lagging in most countries, and even Europe’s call for a ban on facial recognition in public spaces doesn’t really address this creep. As with typical trade-offs in tech, the question is: how much privacy intrusion are billions of ordinary citizens prepared to accept in return for protection beyond that provided by current systems?

 

These kinds of conversations ought to be taking place before enterprises implement facial recognition monitoring.

 

Read More

YouTube Algorithm Keeps Recommending ‘Regrettable’ Videos

 

As the first external audit of objectionable videos on YouTube, the study by Mozilla (a non-profit) reaches sobering conclusions.

 

It’s of particular concern given YouTube’s huge popularity with children: “YouTube’s algorithm, which it uses to recommend hundreds of millions of hours of videos to users every day, is a notorious black box that researchers and academics have so far been unable to access […] YouTube has been criticized for providing minimal evidence or data to back up its claims of detoxifying the algorithm.”

 

If YouTube provides neither transparency into its AI algorithm, nor a verifiable trail showing how its purported actions are curbing the proliferation of harmful content, what recourse is there, especially in light of these findings?

 

Read More