Swifties, take a stand! Taylor Swift fans are denouncing AI-generated explicit deepfakes, presenting a unified front against this intrusion on digital privacy and the proliferation of AI-generated sexual images of the pop star.
Taylor Swift might put it this way: hi, it's not me; AI is the problem.
President Joe Biden is among many who have issued numerous cautions against the use of generative AI to edit audio and video in order to produce deepfakes, which depict people—politicians included—saying or acting in ways that they did not actually say or do.
If you’re one of those people who thinks, “Phooey, those concerns are just overblown,” take a look at three recent instances of deepfakes featuring Biden, X owner Elon Musk, and performer Taylor Swift.
Swifties are well aware of her love of Le Creuset cookware. "Her assortment of kitchenware has been highlighted in her gift selections at a fan's bridal shower, on a Tumblr page devoted to the pop star's interior design, and in a Netflix documentary that was highlighted by Le Creuset's Facebook page," according to The New York Times.
However, her passion for vibrant enameled cookware did not lead her to promote the pricey pots and pans in advertisements that appeared on Facebook and TikTok. In the AI-generated ads, which used her voice and likeness, "Swift" purportedly told her admirers she was "thrilled" to give away cookware sets to those who answered a few questions. Then came the actual fraud: participants were asked for a "small delivery cost of $9.96" in exchange for the cookware, and those who paid were slapped with surreptitious monthly fees without ever receiving the promised cookware.
In Musk's case, a bogus depiction of the billionaire businessman appeared in Facebook ads targeting Australians hoping to get "rich quick" through a sham stock trading scheme called Quantum AI. In a video purporting to be a news story, the deepfake Musk declares, "The latest platform, Quantum AI, will help people get rich quick, not work for every penny." The deepfake also appears to name other billionaires, including Jeff Bezos, Bill Gates, and Richard Branson, as "prominent shareholders" before the fake reporter instructs viewers to "make a minimum investment of $400" on the Quantum AI website, according to RMIT News.
Unfortunately, using celebrities' voices and images to deceive people is nothing new; every year, scammers defraud consumers out of billions of dollars. According to the Federal Trade Commission, fraud cost consumers up to $8.8 billion in 2022, and that was before the rise of today's advanced generative AI tools.
Aside from Swift and Musk, con artists have created a phony Tom Hanks promoting dental plans, impersonated celebrity chef Gordon Ramsay in an identity theft plot, and impersonated Oprah Winfrey to promote keto gummy supplements. Modern AI tools, such as text-to-audio and text-to-video generators, make it quite simple for scammers to swiftly produce deepfakes that appear authentic. In April 2023, the Better Business Bureau warned consumers to be wary of celebrity endorsements because ever-improving artificial intelligence technology "[makes] these phony endorsements more convincing than ever."
Be cautious: many of these celebrity deepfakes are widely disseminated on social media platforms, according to the BBB. If you've fallen victim to a scam, or been targeted by one, the bureau encourages you to report it.
Regarding elections, a day before the state’s primary on January 23, the New Hampshire Department of Justice released an advisory after someone disseminated a robocall purporting to be Biden, urging listeners not to cast ballots in the state’s presidential primary.
The scammer's recording then instructed recipients to call a number to "be removed from the calling list," presumably so their numbers could be added to a database for future deceptions and frauds. The state attorney general's office described the robocall as an attempt to "suppress New Hampshire voters," which is exactly what it was. Funny until someone loses their democracy, that is.
Here are some other AI developments worth checking out.
As the ROI isn’t there yet, AI won’t replace humans in every job.
In their most recent study on how artificial intelligence may or may not alter the future of employment, researchers at the Massachusetts Institute of Technology's Computer Science and Artificial Intelligence Laboratory found that, at least for now, replacing humans with AI is not cost-effective across a range of industries.
“While there is already evidence that AI is changing labor demand, most anxieties about AI flow from predictions about ‘AI Exposure’ that classify tasks or abilities by their potential for automation,” the five investigators said. (This is the report.) “The previous literature on ‘artificial intelligence Exposure’ cannot predict this pace of automation since it attempts to measure an overall potential for AI to affect an area, not the viability from a technical and financial standpoint of developing such systems.”
After studying the potential employment implications of advances in computer vision, they concluded that only 23% of worker wages paid for vision tasks would be economically attractive to automate, and that at today's costs, US businesses would not choose to automate most vision tasks that have "AI exposure."
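The economic logic behind that conclusion can be illustrated with a toy break-even calculation. This is a back-of-the-envelope sketch, not the study's actual model; the function name and all dollar figures are hypothetical.

```python
# Illustrative sketch (not the MIT study's model): automating a task only
# pays off if the AI system's lifetime cost undercuts the wages it saves.
def automation_is_cost_effective(annual_wage_bill, build_cost,
                                 annual_run_cost, years=5):
    """Compare wages saved over the system's lifetime with the cost of
    building and operating it. All figures are hypothetical dollars."""
    total_savings = annual_wage_bill * years
    total_cost = build_cost + annual_run_cost * years
    return total_savings > total_cost

# A vision task paying $50k/year vs. a $400k system costing $30k/year to run:
print(automation_is_cost_effective(50_000, 400_000, 30_000))   # False
# The same system replacing $200k/year of wages:
print(automation_is_cost_effective(200_000, 400_000, 30_000))  # True
```

The point of the study's "23% of wages" figure is that, for most vision tasks, the numbers come out like the first case: the fixed cost of building and maintaining an AI system swamps the wages it would save.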
In my opinion, the key takeaway is that the notion that artificial intelligence will imminently replace human labor is premature. The MIT researchers predict that even over the coming years, "AI job displacement will be substantial, but also gradual — and therefore there is room for policy and retraining to mitigate unemployment impacts."
Mark Zuckerberg promotes open-source artificial intelligence models.
In an interview with tech news site The Verge, CEO Mark Zuckerberg discussed Meta's AI investment and why he believes other businesses should release their technology under open licenses, as his firm did with its Llama large language model.
The topic of discussion was developing artificial general intelligence, or creating a machine that could perform any task performed by humans, maybe even better. That is not the same as generative AI. (See the explanations below.)
Regarding the definition of AGI: "I don't have a succinct, one-sentence definition. It's open for debate whether general intelligence is similar to human intelligence, human intelligence plus, or some far-future superintelligence."
However, what really interests Zuckerberg is the breadth of intelligence: the fact that it encompasses so many diverse capabilities. You need to be able to reason and have intuition, among other things, he said, adding, "I'm not actually that sure that some specific threshold will feel that profound."
Regarding the talent wars for AI:
“We’re used to extremely intense talent wars. However, the dynamics are different here, with several businesses vying for the same clientele and a large number of investors and individuals funding various initiatives, which facilitates people launching separate businesses on the outside.”
Regarding the ownership of artificial intelligence and the necessity of making AGI models, such as Meta’s Llama, open source: “I believe that one of the major issues here will be that if you create anything truly valuable, it will eventually become highly concentrated. On the other hand, increasing transparency solves a wide range of problems that may arise from uneven access to opportunities and value. That’s why it’s crucial to the open-source concept.”
Regarding industry players that abandoned open source and are now advocating for AI regulation: "There were a lot of these firms that were open in the past; they published all of their work and talked about how they were going to open-source it. I think you see the dynamic of people just realizing, 'Hey, this is going to be a really valuable thing, let's not share it,'" Zuckerberg said.
"The biggest companies, which first had the biggest leads, are also often the ones requesting that stringent guidelines be placed on how other companies develop artificial intelligence. Although I'm sure some of them have valid safety worries, it's striking how closely their concerns align with the plan."
How AI is altering the way we pose health-related questions
Raise your hand if you've ever used Google to try to figure out a medical diagnosis. Thanks to AI, we can expect to turn to ChatGPT and similar tools even more often to find answers to our health-related questions.
For better or worse, artificial intelligence is altering the way we research human health, as CNET's Jessica Rendall explains. The way ChatGPT "can quickly synthesize information and personalize results" raises the specter of "Dr. Google," the term researchers use to describe people looking up their symptoms online before seeing a physician. We often call it "self-diagnosing."
For people with persistent and sometimes mysterious health conditions who have endured multiple failed attempts at diagnosis, AI has the potential to be a game changer: it can analyze a list of symptoms and offer potential causes.
However, there are a few issues. The most significant is that AI tools can hallucinate, presenting you with information that appears accurate but isn't. A further risk is "the possibility you could develop 'cyberchondria,' or anxiety over finding information that's not helpful, for instance diagnosing yourself with a brain tumor when your head pain is more likely from dehydration or a cluster headache," Rendall writes.
Nevertheless, ChatGPT can be useful for translating medical jargon into plain English so that patients and physicians can communicate more effectively. "Arguably the best way to use ChatGPT as a 'regular person' without a medical degree or training is to make it help you find the right questions to ask," Rendall says.
With ChatGPT's assistance, a "flawless" novel wins a literary prize.
The Times reports that proponents of generative AI, who believe the technology might elevate human accomplishment and propel humanity to unprecedented heights, celebrated this week's news that a Japanese writer took home a coveted literary award with a book one judge called "flawless."
How did Rie Kudan accomplish such perfection? Her masterpiece, The Tokyo Tower of Sympathy, won the Akutagawa Prize. Kudan claimed that ChatGPT had a part in it. The 33-year-old novelist revealed at an awards ceremony last week that roughly 5% of her book was written by the well-known chatbot from OpenAI, which was quoted verbatim in the book, according to The Telegraph.
“Set in a futuristic Tokyo, the book revolves around a high-rise prison tower and its architect’s intolerance of criminals, with artificial intelligence a recurring theme,” said The Daily Mail. According to the Telegraph, “It centers around an architect who designs a comfortable high-rise prison, but finds herself struggling in a society that seems excessively sympathetic to criminals.”
According to The Telegraph, Kudan claimed that she confides in ChatGPT with her deepest emotions, including feelings she claims she would never discuss with anyone else. She said that the platform’s comments “sometimes inspired dialogue in the novel.”
Not all writers share Kudan's enthusiasm for using generative artificial intelligence in their work. The Authors Guild, which represents novelists including John Grisham, George R.R. Martin, Jodi Picoult, and Scott Turow, filed a lawsuit against OpenAI in September and amended its complaint in December.
Meanwhile, celebrated author Salman Rushdie has said he believes AI tools have a long way to go before they match the creativity of human writers. At a literary event in October, he recounted that an AI was asked to produce 300 words in his style, "and what came out was pure garbage."
“The greatest writers, the best writers have a vision of the world that is personal to themselves, they have a kind of take on reality which is theirs and out of which their whole sensibility proceeds,” Rushdie stated to the Big Think. “Now to have all that in the form of artificial intelligence — I don’t think we’re anywhere near that yet.”
Non-AI generative AI model
One artist is answering prompts with a pen. New York graphic designer Pablo Delcan developed the "non-AI generative AI model" as a clever parody of text-to-image generators and AI prompts. On his website, Prompt-Brush 1.0, you can submit a text prompt, and Delcan will draw your concept in black and white and send it back to you.
A smiling old man, a grim reaper irritated with his laptop, a UFO beaming up a slice of pizza, and a gray-and-white tuxedo cat are just a few of the concepts submitted and wonderfully rendered by Delcan. According to It's Nice That, he has more than 1,000 requests in the queue and has released a selection of the 631-plus drawings he has completed.
Delcan told It's Nice That that each drawing takes him around a minute to complete and that, having spent the previous year "immersed in the world of artificial intelligence, this seemed like a way to poke some fun at that." His sense of humor shows in the "site metrics" he offers and in the way he describes the "technology" underlying his service: "To draw, one dips a brush into black ink and moves it across paper to leave marks. Thick lines are created by applying more pressure than thin ones. By joining these lines, a variety of drawings can be created."
I've submitted my request and will provide an update once I receive an original Delcan.
This week’s AI term: AGI
The pinnacle of artificial intelligence is artificial general intelligence, or AGI: a machine that is capable of performing any task that a human can perform, maybe even better. What distinguishes an AGI from, say, general-purpose AI models like ChatGPT? In my mind, ChatGPT is a technology that simulates or anticipates human reactions. It answers questions like an autocomplete on steroids, but AGI is more like JARVIS from Iron Man or HAL from 2001: A Space Odyssey.
Below are some definitions of artificial general intelligence, which does not currently exist on Earth. To appreciate how hard pinning this down is, read through all of them and then take a look at the final paragraph, from Google DeepMind's description.
AGI, generative AI, and AI are contrasted by Luce Innovative Technologies:
“The term artificial intelligence (AI) refers to the discipline as a whole; generative AI concentrates on producing new content, while general AI seeks to create artificial intelligence systems that can perform a range of cognitive tasks on par with humans. The long-term objective of general artificial intelligence, sometimes referred to as artificial general intelligence (AGI) or artificial super general intelligence (ASI), has not yet been fully realized.”
Market research firm Gartner defines AGI as "a form of artificial intelligence that possesses the ability to understand, learn, and apply knowledge across a wide range of tasks and domains. It integrates general problem-solving abilities, cognitive flexibility, and adaptability, and may be applied to a far wider range of use cases."
According to IBM, "strong artificial intelligence, sometimes referred to as general AI or artificial general intelligence (AGI), is a theoretical form of AI that characterizes a particular approach to artificial intelligence research. Strong AI would require a machine with an intelligence comparable to that of humans; it would have a self-aware consciousness with the capacity to learn, solve problems, and plan for the future."
To sum up, AGI is "an important and sometimes controversial concept in computing research, used to describe an artificial intelligence system that is at least as capable as a human at most tasks," according to Google DeepMind. The idea of artificial general intelligence has moved from being a topic of philosophical discussion to one with immediate practical significance due to the rapid development of machine learning models. Some experts expect AI will exceed humans in most areas within roughly ten years; some even claim that existing LLMs are AGIs.
Others feel that there are already “sparks” of AGI in the newest generation of large language models (LLMs). However, you would probably get 100 similar but differing answers if you asked 100 AI professionals to define what they meant by “AGI.”