A depiction of artificial intelligence controlling humans and vice versa.
Getty
Anthropic on Tuesday released two updated versions of its Claude AI large language model. Perhaps more interestingly, the company also enabled a capability that allows users to grant the LLM access to, and control over, certain aspects of their personal computers.
“A huge amount of modern work is done through computers. The ability for Claude to interact directly with computer software in the same way a human would will enable a wide range of applications not possible with the current generation of AI assistants,” a company spokesperson wrote in an email exchange.
Anthropic has named this new computer navigation feature for Claude “computer use.” Although the LLM itself is not trained on user data, the announcement describes this as a new approach to teaching AI computer navigation, developed with help from the public.
“We teach Claude general computer skills and give it access to a wide range of standard tools and software programs designed for humans. Developers can use this early capability to automate repetitive processes, build and test software, conduct research and perform countless other tasks,” the statement reads.
Claude AI’s computer use feature, explained
The company’s announcement included a demo video showing the software’s various capabilities in responding to a task request from a human. In the demo, once Claude was connected to the computer, it was able to:

- Take multiple screenshots of the computer screen and analyze the images against the user’s query, then search the demo computer for the requested information.
- Access other possible databases or sources, as directed by the human-generated request, when the requested details were not found in the open file.
- Automatically fill in the relevant details in the third-party form/application open on the screen once the information was found.
- Automatically send the completed document.
Once the user issued the initial request from their desktop, every subsequent step was completed without further human intervention.
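The autonomous cycle described above can be sketched in code. The sketch below is hypothetical: every function and action name is a stand-in for illustration (the article does not describe Anthropic’s actual implementation), and the model call is stubbed out rather than hitting any real API. What it shows is the core idea of the loop: screenshot, analyze, act, repeat until the task is done.

```python
# Hypothetical sketch of the screenshot -> analyze -> act loop described
# above. All names are illustrative; ask_model() is a stub standing in
# for a real call to Claude's API.

def take_screenshot() -> bytes:
    # Placeholder: a real tool would capture the actual display.
    return b"fake-screenshot-bytes"

def ask_model(screenshot: bytes, goal: str, history: list) -> dict:
    # Stubbed model: pretend it searches, then fills and submits a form.
    if len(history) == 0:
        return {"action": "search_open_files", "done": False}
    if len(history) == 1:
        return {"action": "fill_form", "done": False}
    return {"action": "submit_form", "done": True}

def run_task(goal: str) -> list:
    history = []
    while True:
        shot = take_screenshot()
        step = ask_model(shot, goal, history)
        history.append(step["action"])
        if step["done"]:
            return history

print(run_task("find vendor details and submit the request form"))
# → ['search_open_files', 'fill_form', 'submit_form']
```

In a real integration, each returned action would be executed by developer-written code before the next screenshot is taken, which is what lets the loop run without further human input after the initial request.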
This is important because it represents a significant leap forward in AI automation for the average individual, and it could further accelerate the adoption of AI agents that complete tasks autonomously.
Here’s how it actually works:
Anthropic’s announcement states that the new utility is available free of charge to anyone interested in accessing Claude’s application programming interface (API) to take advantage of these features and granting the LLM access to their personal computer.
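For the curious, an API request that enables computer use looks roughly like the sketch below. The tool type and model strings match Anthropic’s public beta documentation at launch in October 2024, but treat the exact identifiers as assumptions to verify against current docs; the sketch only builds the request payload rather than sending it.

```python
# Sketch of the request payload a developer sends to enable computer use.
# Identifier strings follow Anthropic's October 2024 beta docs; verify
# them against current documentation before relying on them.

computer_tool = {
    "type": "computer_20241022",   # beta tool-version identifier
    "name": "computer",
    "display_width_px": 1024,      # resolution of the screen Claude sees
    "display_height_px": 768,
}

request = {
    "model": "claude-3-5-sonnet-20241022",
    "max_tokens": 1024,
    "tools": [computer_tool],
    "messages": [
        {"role": "user",
         "content": "Find the vendor details on screen and fill in the form."}
    ],
    # Sent via the beta flag "computer-use-2024-10-22" (the SDK's
    # betas parameter / anthropic-beta header).
}

print(request["tools"][0]["type"])  # → computer_20241022
```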
Anthropic reiterated several times in its statement that because the software is so new, users should expect it to make some mistakes early on and be a little buggy.
“While we expect this capability to improve rapidly in the coming months, Claude’s current ability to use computers is imperfect. Some actions that people perform effortlessly, such as scrolling, dragging and zooming, currently present challenges for Claude, and we encourage developers to begin exploring with lower-risk tasks,” the statement reads.
Potential risks of Claude AI’s computer use
While this AI advancement has generated a lot of excitement among AI influencers and developers across social media, there are inherent risks, Anthropic noted in a post.
“Because computer use may introduce new vectors for more familiar threats such as spam, misinformation and fraud, we are taking a proactive approach to promote its safe adoption. We have developed new classifiers that can identify when the computer use capability is being used and whether harm is occurring. You can read more about the research process behind this new skill, along with further discussion of our safety measures, in our post on developing computer use,” the company writes.
The company’s statement specifically says this upgrade is aimed at “developers,” but in reality, anyone with a Replit.com subscription who can follow a YouTube tutorial and cut and paste an API key can access Claude’s computer use features.
All users need to do is follow the bulleted steps provided under the “Getting Started” subheading in the Replit screenshot below. Here are the steps:
- Add your ANTHROPIC_API_KEY to the Secrets pane.
- Click Run.
- See the AI in action in the Output pane.
- Send commands to the AI in the webview.
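On Replit, values added to the Secrets pane are exposed to the running program as environment variables, so the first step above amounts to a read like the one below. The sample key string is purely illustrative.

```python
import os

def load_api_key() -> str:
    # Replit exposes values from the Secrets pane as environment
    # variables; the demo program reads the key this way rather than
    # hard-coding it in source files.
    key = os.environ.get("ANTHROPIC_API_KEY")
    if not key:
        raise RuntimeError("Add ANTHROPIC_API_KEY to the Secrets pane first.")
    return key

# Illustrative placeholder key, for demonstration only:
os.environ.setdefault("ANTHROPIC_API_KEY", "sk-ant-example")
print(load_api_key()[:6])  # print only a short prefix, never the full key
```

Keeping the key in Secrets rather than in the code matters here, since this key grants the model its access to the machine.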
The Replit interface for accessing Claude’s computer use feature.
Replit website, October 23, 2024
In an email exchange, an Anthropic representative highlighted the “many safeguards and protections” outlined in a separate blog post regarding Claude’s computer use. The email excerpt clearly states the following precautions the company has taken:
- Requires implementation by the developer: Computer use lets Claude take screenshots or perform certain commands on a computer (or virtual machine), such as moving the cursor or entering text, but developers must build the additional tooling that actually carries out those actions. Claude cannot take these actions on its own.
- Data privacy principles: Following our standard approach to data privacy, by default we do not train our generative AI models on user-submitted data, including the screenshots Claude receives. Anthropic only receives screenshots and related computer instructions (user prompts and output) from API customers. We do not collect any other data from your computer.
- Additional safeguards: We have developed new classifiers and prompt-analysis tools to identify potential misuse of the computer use feature.
- Gradual rollout: We intentionally chose not to release this feature on Claude.ai right away. Instead, we are launching a public beta for developers via our API. This allows us to gather valuable feedback and ensure responsible deployment before considering a wider release.
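The first safeguard above is worth making concrete: the model only emits structured action requests, and nothing happens on the machine unless developer-written code executes them. The dispatcher below is a hypothetical sketch of that layer; the action names loosely mirror the kinds of commands the announcement mentions (screenshots, cursor movement, typing), but the exact schema is an assumption.

```python
# Hypothetical sketch of the developer-built execution layer the first
# safeguard describes: Claude returns structured action requests, and
# only this code can actually act on the machine.

def execute_action(action: dict) -> str:
    kind = action.get("action")
    if kind == "screenshot":
        return "captured screen"              # real code would grab the display
    if kind == "mouse_move":
        x, y = action["coordinate"]
        return f"moved cursor to ({x}, {y})"  # real code would move the cursor
    if kind == "type":
        return f"typed {len(action['text'])} characters"
    # Anything unrecognized is refused rather than executed.
    return f"refused unknown action: {kind}"

print(execute_action({"action": "mouse_move", "coordinate": [200, 150]}))
# → moved cursor to (200, 150)
```

Because every action funnels through one function, this layer is also the natural place to refuse or log requests the developer considers out of bounds.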
Despite these measures, the company also posts a fairly solemn warning to potential users on its API implementation page:
Anthropic’s warning to potential users of Claude’s computer use.
Anthropic website, October 23, 2024
The fourth point is particularly relevant, as it states that anyone using this feature must ensure that the AI:

“…requires a human to review decisions that can have meaningful real-world consequences, as well as any tasks requiring affirmative consent, such as accepting cookies, executing financial transactions or agreeing to terms of service.”
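In code, that human-in-the-loop requirement amounts to a gate in front of consequential actions. The sketch below is hypothetical (the category names are illustrative, not Anthropic’s), but it captures the rule: actions requiring affirmative consent are blocked until a person explicitly approves them.

```python
# Hypothetical sketch of the human-review gate the warning calls for:
# consequential actions cannot proceed without explicit human approval.

CONSENT_REQUIRED = {
    "accept_cookies",
    "financial_transaction",
    "agree_to_terms",
}

def gate(action: str, human_approved: bool) -> bool:
    """Return True only if the action may proceed."""
    if action in CONSENT_REQUIRED and not human_approved:
        return False  # pause here and ask the human first
    return True

print(gate("scroll_page", human_approved=False))           # → True
print(gate("financial_transaction", human_approved=False)) # → False
print(gate("financial_transaction", human_approved=True))  # → True
```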
Needless to say, developers and general users alike should proceed with caution during this beta phase of Claude’s computer use.
“We will continue to monitor and refine our safety measures as we gather more data and feedback from developer betas,” the spokesperson concluded.