
GPT-4 can hack websites without human help

Hello Nature readers, would you like to get this Briefing in your inbox free every week? Sign up here.


In a 2023 Nature survey of scientists, 30% of respondents said they had used generative AI tools to help write manuscripts. Credit: Nicolas Maeterlinck/Belga MAG/AFP via Getty

Since its release in November 2022, OpenAI’s chatbot has helped scientists boost their productivity when it comes to writing papers or grant applications. But increasing the throughput of publications could stretch editors and reviewers even thinner than they already are. And, while many authors acknowledge their AI use, some quietly use chatbots to churn out low-value research. “We have to go back and look at what the reward system is in academia,” says computer scientist Debora Weber-Wulff. What might be needed is a shift from a ‘publish or perish’ culture to a system that prioritizes quality over quantity.

Nature | 7 min read

An algorithm can tell where a mouse is and where it’s looking by interpreting the animal’s brain activity. In effect, it works like a map app, explains neuroscientist Adam Hines: “You have positional information (the drop pin) aligned with direction (blue arrow), and during navigation the two are constantly updating as you move. Grid cells [in the mammalian brain] are like the GPS and heading cells are like a compass.” Incorporating the way the brain processes spatial data into an algorithm could eventually help robots to navigate autonomously, mathematician and study co-author Vasileios Maroulas says.

New Scientist | 3 min read

Reference: Biophysical Journal paper
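
The study works from brain recordings rather than code, but the map-app analogy is simple to sketch in Python: keep a 'drop pin' position estimate (loosely, the grid-cell signal) and a 'compass' heading estimate (loosely, the head-direction signal), and update both as the animal moves. Everything below, from the class name to the numbers, is our own illustration rather than anything taken from the paper.

```python
import math

# Minimal dead-reckoning sketch of the map-app analogy: a position estimate
# (the "drop pin") and a heading estimate (the "compass") are updated
# together during navigation. Purely illustrative; not from the paper.
class NavState:
    def __init__(self, x=0.0, y=0.0, heading=0.0):
        self.x = x              # position, arbitrary units
        self.y = y
        self.heading = heading  # radians, 0 = facing east

    def turn(self, delta_angle):
        """Update the compass-like heading estimate."""
        self.heading = (self.heading + delta_angle) % (2 * math.pi)

    def step(self, distance):
        """Move forward along the current heading and update the drop pin."""
        self.x += distance * math.cos(self.heading)
        self.y += distance * math.sin(self.heading)

if __name__ == "__main__":
    mouse = NavState()
    mouse.turn(math.pi / 2)   # face "north"
    mouse.step(1.0)           # walk one unit forward
    print(f"position=({mouse.x:.2f}, {mouse.y:.2f}), heading={mouse.heading:.2f} rad")
```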

OpenAI’s GPT-4 can be tuned to autonomously hack websites with a 73% success rate. Researchers got the model to crack 11 out of 15 hacking challenges of varying difficulty, including manipulating source code to steal information from website users. GPT-4’s predecessor, GPT-3.5, had a success rate of only 7%. Eight other open-source AI models, including Meta’s LLaMA, failed all the challenges. “Some of the vulnerabilities that we tested on, you can actually find today using automatic scanners,” but those tools can’t exploit those weak points themselves, explains computer scientist and study co-author Daniel Kang. “What really worries me about future highly capable models is the ability to do autonomous hacks and self-reflection to try multiple different strategies at scale.”

The Register | 7 min read

Reference: arXiv preprint (not peer reviewed)

Infographic of the week

A world map showing wheat-growing areas colour-coded according to their ammonia emissions. Large swathes of central and southern Africa appear in blue and green (low emissions) while most of Europe, southwest and southeast Asia appear in yellow and red (high emissions).

(Peng Xu et al./Nature)

Harmful ammonia emissions from wheat, rice and maize cultivation could be slashed by 38%. A detailed map of ammonia — created by training a machine learning model on field measurements, and environmental and agronomic data — allowed researchers to explore how better fertilizer management and soil preparation could reduce emissions. (Nature Research Briefing | 6 min read, Nature paywall)

Reference: Nature paper
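
The Briefing doesn't say which model the authors trained, so treat the following as a generic sketch of the kind of workflow described: fit a regression model to environmental and agronomic features and use it to predict ammonia emissions. The feature names and the synthetic data are invented for illustration; the paper's actual inputs and model may differ.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

# Toy stand-in for the mapping workflow: predict ammonia emissions from
# made-up environmental and agronomic features. Synthetic data only.
rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    rng.uniform(0, 300, n),    # nitrogen fertiliser rate (kg/ha), assumed feature
    rng.uniform(5, 35, n),     # mean temperature (deg C), assumed feature
    rng.uniform(4.5, 8.5, n),  # soil pH, assumed feature
    rng.uniform(0, 2000, n),   # annual rainfall (mm), assumed feature
])
# Synthetic target that loosely rises with fertiliser, temperature and pH.
y = 0.02 * X[:, 0] + 0.3 * X[:, 1] + 1.5 * X[:, 2] - 0.001 * X[:, 3] + rng.normal(0, 1, n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor().fit(X_train, y_train)
print(f"R^2 on held-out plots: {model.score(X_test, y_test):.2f}")
```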

Features & opinion

“Classical robotics is very brittle because you have to teach the robot a map of the world, but the world is changing all the time,” says Naganand Murty, co-founder of a company that makes garden-landscaping robots. Large language models (LLMs) could give robots the ‘brains’ to deal with unforeseen circumstances and even imbue them with a higher level of understanding. “For example, you can say, ‘I didn’t sleep well last night. Can you help me out?’ And the robot should know to bring you coffee,” explains roboticist Fei Xia. Sceptics argue that smart robots could be dangerous if they misunderstand a request or fail to appreciate its implications. “There are roboticists who think it’s actually bad to tell a robot to do something with no constraint on what that thing means,” says computer scientist Jesse Thomason.

Scientific American | 17 min read
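
None of the groups in the article publish their code here, but the general 'LLM as robot brain' pattern is easy to sketch, together with one common response to the sceptics' worry: only let the model choose from a whitelist of approved skills. The skill list, prompt and model name below are our own assumptions for illustration, not any lab's actual system, and running it needs the openai Python package plus an API key.

```python
from openai import OpenAI  # needs the `openai` package and OPENAI_API_KEY set

# Map a free-form request onto one of a few pre-approved robot skills.
# Everything here (skills, prompt, model name) is an illustrative assumption.
ALLOWED_SKILLS = ["bring_coffee", "bring_water", "open_curtains", "do_nothing"]

def choose_skill(user_request: str) -> str:
    client = OpenAI()
    prompt = (
        "You control a home robot. Reply with exactly one skill name from "
        f"this list and nothing else: {', '.join(ALLOWED_SKILLS)}.\n"
        f"User request: {user_request}"
    )
    reply = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    skill = reply.choices[0].message.content.strip()
    # Refuse anything outside the whitelist rather than improvising.
    return skill if skill in ALLOWED_SKILLS else "do_nothing"

if __name__ == "__main__":
    print(choose_skill("I didn't sleep well last night. Can you help me out?"))
```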

From automated literature reviews to ‘self-driving’ labs, AI systems could make research more exciting, more accessible, faster and, in some ways, unrecognizable, argues former Google chief executive Eric Schmidt. With sensible regulation and proper support, AI can transform the scientific process, he says. “Young researchers might be shifting nervously in their seats at the prospect,” Schmidt says. “Luckily, the new jobs that emerge from this revolution are likely to be more creative and less mindless than most current lab work.”

MIT Technology Review | 12 min read

“Tokens are the standard unit of measure in LLM Land,” says software entrepreneur Dharmesh Shah. AI systems break up words into building blocks, partly to more efficiently convert them into numerical representations. For example, GPT breaks up the word ‘don’t’ into don and ’t. Treating ’t as its own entity means four words — can, can’t, don and don’t — can be described with just three tokens: can, don and ’t. You can see the process yourself using OpenAI’s Tokenizer.

Simple.AI blog | 4 min read
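
If you'd rather poke at tokenization from code than from the web Tokenizer, OpenAI's open-source tiktoken library does the same job. A quick sketch (the exact split depends on which encoding you pick, so treat the printed pieces as whatever that encoding actually returns):

```python
import tiktoken  # OpenAI's open-source tokenizer library

# Reproduce the "don't" example: encode a few words and show how each one
# is broken into token pieces by the cl100k_base encoding.
enc = tiktoken.get_encoding("cl100k_base")

for word in ["can", "can't", "don", "don't"]:
    token_ids = enc.encode(word)
    pieces = [enc.decode([t]) for t in token_ids]
    print(f"{word!r:>8} -> {pieces}")
```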

Quote of the day

Stefano Soatto from Amazon Web Services’ AI division likens self-learning techniques — using machines to train machines — to buttering a slice of bread: the initial training data is the pat of butter in the centre. Self-learning spreads it around evenly, making the meal tastier but not fundamentally different. (The Atlantic | 10 min read)

Today, I’m amused that some AI models seem to get better at maths when they are asked to adopt the persona of a Star Trek captain. Why? That’s “the $64 million question”, says machine-learning engineer Rick Battle, who co-wrote a preprint about these ‘eccentric automatic prompts’.

Your ‘Captain’s Log’ entries about this newsletter are always welcome at [email protected].

Thanks for reading,

Katrina Krämer, associate editor, Nature Briefing

With contributions by Flora Graham

Want more? Sign up to our other free Nature Briefing newsletters:

• Nature Briefing — our flagship daily e-mail: the wider world of science, in the time it takes to drink a cup of coffee

• Nature Briefing: Anthropocene — climate change, biodiversity, sustainability and geoengineering

• Nature Briefing: Cancer — a weekly newsletter written with cancer researchers in mind

• Nature Briefing: Translational Research — biotechnology, drug discovery and pharma
