Activity tagged "tech industry"

Posted:

New research from AWU/CWU/Techquity on AI data workers in North America. “[L]ow paid people who are not even treated as humans [are] out there making the 1 billion dollar, trillion dollar AI systems that are supposed to lead our entire society and civilization into the future,” says one worker.

We identify four broad themes that should concern policymakers:

Workers struggle to make ends meet. 86% of surveyed workers worry about meeting their financial responsibilities, and 25% of respondents rely on public assistance, primarily food assistance and Medicaid. Nearly two-thirds of respondents (66%) report spending at least three hours weekly sitting at their computers waiting for tasks to be available, and 26% report spending more than eight hours waiting for tasks. Only 30% of respondents reported that they are paid for the time when no tasks are available. Workers reported a median hourly wage of $15 and a median workweek of 29 hours of paid time, which equates to annual earnings of $22,620 ($15 × 29 hours × 52 weeks).
 
Workers perform critical, skilled work but are increasingly hamstrung by lack of control over the work process, which results in lower work output and, in turn, higher-risk AI systems. More than half of the workers who are assigned an average estimated time (AET) to complete a task felt that AETs are often not long enough to complete the task accurately. 87% of respondents report they are regularly assigned tasks for which they are not adequately trained. 
 
With limited or no access to mental health benefits, workers are unable to safeguard themselves even as they act as a first line of defense, protecting millions of people from harmful content and imperfect AI systems. Only 23% of surveyed workers are covered by health insurance from their employer. 
 
Deeply involved in every aspect of building AI systems, workers recognize the wide range of risks that these systems pose to themselves and to society at large. 52% of surveyed workers believe they are training AI to replace other workers’ jobs, and 36% believe they are training AI to replace their own jobs. 74% were concerned about AI’s contribution to the spread of disinformation, 54% about surveillance, and 47% about the use of AI to suppress free speech, among other issues.
Read:
In six months, the Trump administration has already withdrawn or halted enforcement actions against 165 corporations of all types – and one in four of the corporations benefiting from halted or dropped enforcement is from the technology sector, which has spent $1.2 billion on political influence during and since the 2024 elections.
Read:
From lending to payments to stock trading to crypto, prominent fintech businesses have found a competitive edge not in technology itself, but in using narratives about technology as a smokescreen for the profitable arbitrage of financial regulations. This modus operandi is encouraged by Silicon Valley’s venture capitalists, who decide which businesses to fund and often provide advice, gin up hype, and lobby for the businesses they’ve chosen. Our society continues to shower VCs with public subsidies, but as I argue in this brief post, if regulatory arbitrage is what we’re getting from Silicon Valley’s VCs in exchange, it’s well past time to reconsider this relationship. 
Read:
This model of media capture has since become a case study in soft authoritarian control. Its blueprint rests on four pillars: the takeover of public media, the political capture of the media regulator, the deployment of state funds as leverage over editorial content, and the strategic acquisition of private outlets. This formula has been successfully exported—with variations—to other countries. ... Efforts to manipulate the media are nothing new; history is littered with regimes that sought to bend the press to their will. What distinguishes this modern form of capture, however, is the role of the private sector. Corporations reliant on government contracts or regulatory leniency buckled under pressure, buying up media outlets and turning them into mouthpieces of state propaganda. In the digital age, media capture is often coupled with digital authoritarianism, where governments and non-state actors collaborate to use technologies to conduct surveillance, restrict access to information, and distort the journalistic ecosystem with authoritarian-friendly outlets and campaigns of disinformation.
Read:
When techies describe their experience of AI, it sometimes sounds like they're describing two completely different realities – and that's because they are. For workers with power and control, automation turns them into centaurs, who get to use AI tools to improve their work-lives. For workers whose power is waning, AI is a tool for reverse-centaurism, an electronic whip that pushes them to work at superhuman speeds. And when they fail, these workers become "moral crumple zones," absorbing the blame for the defective products their bosses pushed out in order to goose profits. As ever, what a technology does pales in comparison to who it does it for and who it does it to.
Read:
"Say I tell you that you have my permission to move a book I wrote (and am thus the copyright holder for) from your Kindle to another device. If the Kindle book has DRM, you're still not allowed to move it. The fact that I am the copyright holder has no impact on whether Amazon—a company that didn't create or invest in my book—can prevent you from moving that book outside of its walled garden...In fact, if I supply you with a tool to remove DRM (like some versions of Calibre), then I commit a felony and Amazon can have me sent to prison for five years for giving you a tool to move my book from the Kindle app to a rival app like Kobo," [Cory Doctorow] wrote.