About 6 months ago, I played a bit with large language models. It was interesting. The responses read like a middle school kid who hadn't read the book, but tried to summarize the CliffsNotes anyway.
The past couple of days, there has been a problem in another IT area affecting my ability to do my work. Super frustrating. But every time there is an update, I am left wanting to know more. So, I fed a prompt based on the information in the latest status update into Copilot. I got back something that looks like a better, more understandable description than I normally get from a human.
I wonder whether, because I am not a subject matter expert in the technologies involved, this is bullshit. But it feels right enough.
And I think that scares me.