As always, there's a lot going on in AI: from the Biden administration's new rules on the government's use of AI, to the Federal Trade Commission banning AI-generated customer reviews, to the AI chatbot modeled on a Game of Thrones character that a 14-year-old boy was talking with before his tragic suicide.
Below is some of the news. But to start thinking positively about how generative AI can advance our understanding of almost anything, let's begin with this: European scientists have devised an AI algorithm to interpret the sounds pigs make, a kind of early warning system that tells farmers when their livestock need cheering up.
Reuters reported the news under the headline "AI decodes oinks and grunts to keep pigs happy," noting that the algorithm "could alert farmers to negative emotions in pigs" so they can intervene and improve the animals' well-being. The news organization spoke with Elodie Mandel-Briefer, a behavioral biologist at the University of Copenhagen who is co-leading the effort.
Scientists from universities in Denmark, the Czech Republic, France, Germany, Norway, and Switzerland used "thousands of pig sounds recorded in a variety of scenarios, including play, isolation, and competition for food," and found that grunts and squeals reveal positive or negative emotions, Reuters said. Mandel-Briefer told the news agency that while good farmers can gauge the state of their pigs just by watching them in the pen, the tools available today mostly monitor the animals' physical health, not their emotional condition.
“Animal emotions are central to animal welfare, but they are undervalued on farms,” she told Reuters.
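Neither Reuters nor the researchers share the model itself, but the general recipe for this kind of bioacoustics work is well established: extract acoustic features from labeled audio clips, then train a classifier to separate positive from negative calls. Here's a minimal sketch in Python of what such a pipeline might look like; the clips/positive and clips/negative folder layout, the 20-coefficient MFCC features, and the random-forest classifier are my illustrative assumptions, not details from the study.

```python
# A minimal sketch of a vocalization classifier, NOT the Copenhagen team's
# actual model. Assumes a hypothetical folder of labeled WAV clips:
# clips/positive/*.wav and clips/negative/*.wav.
from pathlib import Path

import librosa
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def mfcc_features(wav_path: Path) -> np.ndarray:
    """Summarize a clip as its mean MFCC vector, a common audio fingerprint."""
    audio, sr = librosa.load(wav_path, sr=16_000)
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=20)
    return mfcc.mean(axis=1)

X, y = [], []
for label in ("positive", "negative"):  # hypothetical directory layout
    for wav in Path("clips", label).glob("*.wav"):
        X.append(mfcc_features(wav))
        y.append(label)

# Hold out 20% of the clips to estimate accuracy on unseen calls.
X_train, X_test, y_train, y_test = train_test_split(
    np.array(X), np.array(y), test_size=0.2, random_state=0
)
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```

An early warning system of the kind Reuters describes would then run the trained classifier over live barn audio and flag stretches dominated by negative calls.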
If, like me, you're a fan of Babe and Charlotte's Web, you're probably nodding along at the idea that animals' feelings deserve due consideration. And you may be marveling that AI could help: even if we can't exactly talk to animals like Doctor Dolittle or Eliza Thornberry, it has the potential to at least give farmers useful insight into the inner lives of their creatures.
Considering that researchers have also used AI to decipher the sounds elephants make, and concluded that elephants call each other by name just like humans do, I'm optimistic that a Dolittle/Thornberry-style translation chatbot isn't far off.
Here are some other notable AI initiatives:
Apple Intelligence is heading our way, in a way
Apple users will soon start seeing some of the Gen AI tools promised as part of the company’s Apple Intelligence rollout when the tech giant releases a software update for iPhones, iPads, and Macs this week.
iOS 18.1 comes with several Apple Intelligence features, including "AI-recommended writing tools that pop up in documents and emails, photo tools with Clean Up to remove unwanted parts of images, and numerous changes to Siri," CNET reviewers Scott Stein and Patrick Holland reported. "The most notable changes to Siri include a new voice designed to sound more natural, the ability to understand the context of a conversation, a new glowing border around the display when Siri is running, and a new double-tap gesture at the bottom of the screen to type to Siri."
There's also a caveat, they added: while some of Apple's AI features seem really useful, they'll roll out over the coming year and only to certain devices (iPhone 15 Pro models and newer, plus Macs and iPads with M-series chips), which means they won't be available to everyone.
Apple is seen as lagging behind rivals like Microsoft and Google when it comes to next-generation AI tools. So what's behind Apple's slow and limited AI rollout? Software chief Craig Federighi told The Wall Street Journal's Joanna Stern that Apple is taking a cautious approach to gen AI because of its focus on privacy and on the responsible use of artificial intelligence.
“When you put something out there, there can be some kind of chaos,” Federighi told Stern. “Apple’s perspective is, ‘Let’s get each part right and release it when it’s ready.'”
Or maybe, as some have speculated, Apple is falling behind its rivals.
I'm waiting to try Genmoji, which I believe could be a creative way for Apple users to get comfortable writing gen AI prompts. You can find details here.
FTC bans fake (AI-generated) reviews and testimonials
There's good reason to question whether the reviews and customer testimonials on sites like Amazon and Yelp, which claim to be written by everyday people, should be believed. Now, the U.S. Federal Trade Commission has announced new rules prohibiting, among other things, "fake or deceptive consumer reviews," such as those generated by AI that falsely appear to be from someone who doesn't exist. The rule banning fake reviews aims to save consumers time and money, the FTC said in a release.
"Fake reviews not only waste people's time and money, but also pollute the marketplace and divert business away from honest competitors," FTC Chair Lina Khan said in an August release highlighting the rule, which went into effect last week. The rules also prohibit the sale or purchase of online reviews.
CNET writer Samantha Kelly pointed out that the new rules will also apply to reviews going forward. "According to marketing platform Uberall, approximately 90% of people trust reviews when shopping online," Kelly wrote. It's unclear how the FTC will enforce the rule, she noted, but it may choose to pursue a few high-profile cases to set an example, and it can seek fines of up to $51,744 per violation.
If you believe a review is fake, you can report it to the FTC here.
FYI, all CNET reviews are done by human staff members and, in accordance with our AI policy, we don't use AI tools for the hands-on, product-based testing that informs our reviews and ratings. The exception is when we're reviewing and rating an AI tool itself and need to generate example output from it, as we do in AI Atlas, our human-curated compendium of AI news and information.
For Elon Musk, AI imitation may be the sincerest form of theft, lawsuit claims
The production company that brought you the movie Blade Runner 2049 is not keen on Elon Musk. Alcon Entertainment claims that an AI-generated image Tesla used to introduce its robotaxi at the vehicle's October debut was far too similar to an image from the 2017 film.
Alcon is suing Tesla, CEO Musk, and film distributor Warner Bros., saying they used an "iconic still" from the movie to promote Tesla's new Cybercab, according to Alcon's 41-page complaint (which you can read here).
"Alcon refused all permissions and adamantly objected to Defendants suggesting any affiliation between BR2049 and Tesla, Musk or any Musk-owned company," the complaint states. "Defendants then used an apparently AI-generated faked image to do it all anyway."
The BBC reports that Musk has referenced the original Blade Runner in the past and "at one point hinted it was an inspiration for Tesla's Cybertruck." You can see the images in question in the BBC's story.
Tesla and Warner Bros. did not respond to media outlets' requests for comment. Musk went into troll mode when responding to news of the lawsuit, saying in an X post that "that movie was terrible" without addressing the specifics of the complaint, Variety reported. The Washington Post noted that Musk referenced Blade Runner during the Cybercab reveal. "I love Blade Runner, but I don't know if we want that future," Musk said. "I believe we want that duster [coat] he's wearing, but not the dark apocalypse. We want to have a fun, exciting future."
Tesla calling its robotaxi event "We, Robot" also prompted filmmaker Alex Proyas, director of the 2004 film "I, Robot," based on the stories of Isaac Asimov, to point out similarities to his movie's designs. "Hey Elon, can I have my designs back please," Proyas wrote in a post on X that has been viewed 8.1 million times.
OpenAI whistleblower cites copyright concerns, Perplexity gets sued
Publishers and AI companies are locked in an active fight over whether the makers of the large language models behind gen AI chatbots (such as OpenAI's ChatGPT, Anthropic's Claude, and Perplexity) should be allowed to scrape content from the internet, including copyrighted material, to train their models. Publishers say no, with The New York Times notably suing OpenAI and Microsoft. AI companies argue they're operating under fair use guidelines and don't need to compensate, or even seek permission from, copyright owners.
Last week, the Times reported that a former OpenAI researcher who was involved in collecting material from the internet to feed ChatGPT believes OpenAI's use of copyrighted data violated the law. Suchir Balaji, who worked at OpenAI for four years, shared his concerns with the paper in a series of interviews, arguing that "ChatGPT and other chatbots are destroying the commercial viability of the individuals, businesses, and internet services that created the digital data used to train these AI systems," the Times reported.
“This is not a sustainable model for the entire internet ecosystem,” Balaji told the newspaper.
In response to Balaji's claims, OpenAI reiterated that it collects content from the internet "in a manner protected by fair use." And, coincidentally or not, OpenAI and Microsoft also announced that they'll each contribute $5 million in cash and technology services to fund AI implementation projects at five major metropolitan daily news organizations.
Meanwhile, Perplexity AI has been sued by Dow Jones & Co., which is owned by media mogul Rupert Murdoch's News Corp, and the New York Post. The media companies say the AI startup has made "massive" illegal copies of their copyrighted material. Reuters also noted that the NYT sent Perplexity a cease-and-desist letter earlier this month "demanding it stop using the newspaper's content for generative AI purposes."
Following the Dow Jones lawsuit, Perplexity CEO Aravind Srinivas told Reuters he was "surprised" by it and said the company was open to discussions with publishers about licensing content. The suit comes after Wired and Forbes accused Perplexity of plagiarizing their content, prompting the AI search engine to launch a revenue-sharing program with publishers.
This is also worth knowing…
When asked for his thoughts on the technology, director Spike Lee called AI "scary." Lee was speaking as part of a lecture series sponsored by the Gibbes Museum. "I was in my hotel room looking at Instagram, and they had this thing [with] the lower third saying it's AI, and people are saying the exact opposite of what they say. So I don't know what's going on, and it's scary. It's terrifying," said Lee, the creative force behind such classic films as "Do the Right Thing" and "Malcolm X." "I just think ... sometimes technology can go too far." A video of the event can be viewed on YouTube here; his AI comments come at about the 57-minute mark.
If you’re looking for lessons on how to use Gen AI tools, CNET contributor Carly Quellman offers a review of MasterClass’ three-part series on how to leverage AI.
The Biden administration released a memorandum outlining how to put "guardrails" on the way the Department of Defense, intelligence agencies, and other national security institutions use and protect artificial intelligence technology in decisions ranging from nuclear weapons to granting asylum, The New York Times reported. You can read the memorandum here.
Researchers at the University of California, Los Angeles, announced that they have developed an AI deep learning system that quickly learns to automatically analyze and diagnose MRIs and other 3D medical images, with accuracy that rivals that of medical specialists, "in a fraction of the time."
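The UCLA announcement doesn't come with code in this column, but "3D medical images" points at a familiar architecture: a convolutional network whose kernels slide through volumes of voxels rather than flat pictures. Here's a toy PyTorch sketch showing the shape of such a model; the volume size, channel counts, and two-class output are all invented for illustration, and this is not the UCLA team's system.

```python
# A toy 3D convolutional classifier for volumetric scans (e.g. MRI),
# illustrating the general architecture only, not UCLA's model.
import torch
import torch.nn as nn

class Tiny3DNet(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1),  # 1 input channel: grayscale voxels
            nn.ReLU(),
            nn.MaxPool3d(2),                            # halve depth, height, and width
            nn.Conv3d(8, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),                    # global pooling to (16, 1, 1, 1)
        )
        self.classifier = nn.Linear(16, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

# Fake batch: two 64x64x64 volumes, one channel each.
scans = torch.randn(2, 1, 64, 64, 64)
logits = Tiny3DNet()(scans)
print(logits.shape)  # torch.Size([2, 2]): one score per class
```

A production system would train a far deeper network on thousands of labeled scans; the point here is only that the same convolution-then-classify pattern used for 2D photos extends naturally to 3D volumes.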