
I agree with you. I just wanted to add some nuance: it is in fact possible to incorporate LLMs in a fairly controlled way while estimating the risk of failure and the associated social and financial costs. I do it every day, but I'm no tech bro, and I dislike the 'AI will fix everything' types as much as everyone here. For concreteness, a toy sketch of the kind of back-of-the-envelope calculation I mean is below.
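
This is only an illustrative sketch: the function name, traffic numbers, failure rate, and budget threshold are all made-up placeholders, not figures from my actual work.

```python
# Toy expected-cost model for gating an LLM-backed feature.
# All figures here are illustrative assumptions, not real measurements.

def expected_failure_cost(
    calls_per_day: float,      # assumed traffic volume
    failure_rate: float,       # estimated P(bad output), e.g. from an eval set
    cost_per_failure: float,   # estimated social + financial cost per bad output
) -> float:
    """Expected daily cost of LLM failures under the assumed rates."""
    return calls_per_day * failure_rate * cost_per_failure

# Deploy only if the expected failure cost stays under an agreed budget.
daily_budget = 50.0  # assumed acceptable daily loss
cost = expected_failure_cost(
    calls_per_day=10_000,
    failure_rate=0.002,      # e.g. measured on a held-out eval set
    cost_per_failure=1.50,   # support time, refunds, reputational proxy
)
print(f"expected daily failure cost: ${cost:.2f}")
print("deploy" if cost < daily_budget else "keep a human in the loop")
```

The point isn't the specific numbers; it's that once you write the estimate down, the decision to use an LLM becomes an explicit, reviewable trade-off rather than a leap of faith.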
