Next, they must formulate a research question and design and conduct an experiment in pursuit of an answer.
Then, they must analyse and interpret the results of the experiment, which may raise yet another research question.
Can a process this complex be automated? Last week, Sakana AI Labs announced the creation of an “AI scientist” – an artificial intelligence system they claim can make scientific discoveries in the area of machine learning in a fully automated way.
Why, instead of creating actually useful tools to help human scientists, are we creating their shitty AI versions? Is it some effort to stop the AI hype train from derailing?
Ooof, this is not going to end well. I was optimistic about AI like everyone else, but hype stacked on flimsy arguments stacked on humiliating disasters just … doesn’t make me confident.
I trust human scientists to work around the flaws here, but only SOME human scientists earn that trust 🙄🤷♂️
well, brace for a bunch of BS papers
What about the peer review process?
Using generative large language models (LLMs) like those behind ChatGPT and other AI chatbots, the system can brainstorm, select a promising idea, code new algorithms, plot results, and write a paper summarising the experiment and its findings, complete with references.
Wow, sounds even less useful than navel gazing internet comments. Can’t wait for scientific papers to have the exact same problems that a Google search does, where you’re inundated with barely relevant AI slop.