Activity tagged "artificial intelligence"

Posted:

New research from AWU/CWU/Techquity on AI data workers in North America. “[L]ow paid people who are not even treated as humans [are] out there making the 1 billion dollar, trillion dollar AI systems that are supposed to lead our entire society and civilization into the future,” says one.

We identify four broad themes that should concern policymakers:

Workers struggle to make ends meet. 86% of surveyed workers worry about meeting their financial responsibilities, and 25% of respondents rely on public assistance, primarily food assistance and Medicaid. Nearly two-thirds of respondents (66%) report spending at least three hours weekly sitting at their computers waiting for tasks to be available, and 26% report spending more than eight hours waiting for tasks. Only 30% of respondents reported that they are paid for the time when no tasks are available. Workers reported a median hourly wage of $15 and a median workweek of 29 hours of paid time, which equates to annual earnings of $22,620. 
 
Workers perform critical, skilled work but are increasingly hamstrung by lack of control over the work process, which results in lower work output and, in turn, higher-risk AI systems. More than half of the workers who are assigned an average estimated time (AET) to complete a task feel that AETs are often not long enough to complete the task accurately. 87% of respondents report that they are regularly assigned tasks for which they are not adequately trained. 
 
With limited or no access to mental health benefits, workers are unable to safeguard themselves even as they act as a first line of defense, protecting millions of people from harmful content and imperfect AI systems. Only 23% of surveyed workers are covered by health insurance from their employer. 
 
Deeply involved in every aspect of building AI systems, workers recognize the wide range of risks that these systems pose to themselves and to society at large. 52% of surveyed workers believe they are training AI to replace other workers’ jobs, and 36% believe they are training AI to replace their own jobs. 74% are concerned about AI’s contribution to the spread of disinformation, 54% about surveillance, and 47% about the use of AI to suppress free speech, among other issues.
Posted:

Someone should probably inform the White House's "AI & Crypto Czar" that no one is forcing AI companies to train their models on Wikipedia

Tweet by David Sacks: "Wikipedia is hopelessly biased. An army of left-wing activists maintain the bios and fight reasonable corrections. Magnifying the problem, Wikipedia often appears first in Google search results, and now it’s a trusted source for AI model training. This is a huge problem."

You would think the obvious solution to "the volunteer-powered project we all train our AI models on for free isn't adequately twisting reality to our political views" would be "... and so we stopped training on it" and not "... and so we will force the volunteers to bend to our will"

Read:
too many people are doing a great disservice to their writing by garnishing it with generative-ai (artificial intelligence) - ethics and values aside (lol), it looks tacky and it cheapens the words around it. there are so many human-created, realistic, and beautiful images available for you to use on your blogs, websites and projects for free. the following is a list that i believe just scratches the surface of what's available out there.
Read:
I think there are three distinct things going on here, each of them interesting in their own right but hard to disentangle: 1. This has all the hallmarks of a moral panic. ... 2. As far as I can tell from reading news articles and forum threads this is really an extension of the "LLM sycophancy" discourse that's been ongoing for a while now. ... 3. BlueSky user Tommaso Sciortino points out that part of what we're witnessing is a cultural shift away from people fixating on religious texts during mental health episodes to fixating on LLMs.
Read:
It begins to feel like a broad celebration of mediocrity. Finally, society says, with a huge sigh of relief. I don’t have to write a letter to my granddaughter. I don’t have to write a three-line fetch call. I don’t have to know anything, care about what I’m doing, or even have an opinion. I can just substitute some Content™. I can just ask the computer for Whatever. But I like programming. I like writing. I like making things and then being able to sit back and look at them and think, holy fuck, I made that. There is no joy for me in typing a vague description into a computer and refreshing my way through a parade of Whatever until something is good enough.