Yeah who cares about actually reading and properly understanding anything at all. Given the policy world is filled with so much BS, no wonder they like a BS machine.
Building experience with tools that automate expected drudgery, like making PPT slides or wordsmithing an NSC memo, is a skill worth developing.
There is a lot of low-hanging fruit in professional tooling that can and should be automated where possible; a class similar to MIT's "Missing Semester," but oriented toward productivity tools, would be helpful.
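To make the slide-deck case concrete, here's a minimal sketch using the python-pptx library to stamp out a deck from structured notes. The section titles, bullets, and output file name are placeholders I made up for illustration:

```python
# Minimal sketch: generate a slide deck from structured notes with python-pptx.
# The section titles and bullets below are placeholder content, not from any real memo.
from pptx import Presentation

sections = {
    "Background": ["Problem statement", "Prior work"],
    "Options": ["Option A: status quo", "Option B: pilot program"],
    "Recommendation": ["Proceed with pilot", "Review in 90 days"],
}

prs = Presentation()  # default template; swap in an organizational .pptx template if you have one

for title, bullets in sections.items():
    slide = prs.slides.add_slide(prs.slide_layouts[1])  # layout 1 = "Title and Content"
    slide.shapes.title.text = title
    body = slide.placeholders[1].text_frame
    body.text = bullets[0]                 # first bullet replaces the placeholder text
    for bullet in bullets[1:]:
        body.add_paragraph().text = bullet

prs.save("briefing.pptx")
```

The point isn't these particular lines; it's that once the notes are structured, the deck becomes a byproduct rather than an afternoon of clicking.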
Which is why, when someone sends me an AI-generated message that they previously would've written themselves, it feels like they're jamming one of my skills.
Not only am I losing information I used to get (e.g., that the person thought to mention this aspect, that they chose to express it this way, that they invested this much effort in a message to me, etc.), but, if I don't know it's AI-generated, the message is actively giving me wrong information about all those things I read into it.
(I'm reasonably OK at reading between the lines, for someone with only basic schooling. Though sometimes I'm reminded that some of my humanities-major friends are obviously much better at interpreting and expressing. Maybe they're going to take the AI-slop apocalypse even worse than I do.)
Two domains that are rife with hype and with self-serving, self-nominated experts, and that are both put to use manipulating the public for questionable purposes.
Seems typical for humans: centuries of falsely believing religion was accurate, and now contemporary nation-state politics, economics, and the engineered things they sell for profit.
So long as enough stuff is available on shelves to keep people sedate, they'll believe whatever. Our biology couples us to knowing when we need food and water; keep those normal and no one cares about anything else. Riots only occur when biology is threatened. Everything else about humanity is 100% made-up false belief, appeals to empty trust in what we say.
Physics makes it pretty clear it's all just skin suits pulling illusions out of their asses all the way down. We can never change the immutable forces of physics; there's too much other stuff in the universe rushing in to correct. This is it for humans: idling about on Earth, hallucinating.
I think, now more than ever, we need to clearly distinguish reproducible science from untested hypotheses. Reality vs. opinion.
Update: "opinion" is not quite the right word here. Perhaps somebody else can think of a better one.
> Claude was also able to create a list of leaders with the Department of Energy Title 17 credit programs, EXIM, DFC, and other federal credit programs that the team should interview. In addition, it created a list of leaders within the Congressional Budget Office and the Office of Management and Budget that would be able to provide insights. See the demo here:
And then there is a video of them "doing" this. But the video basically just has Claude responding, "I'm sorry, I can't do that; please look at their website," etc.
Am I missing something here?
> The team came up with a use case the teaching team hadn’t thought of – using AI to critique the team’s own hypotheses. The AI not only gave them criticism but supported it with links from published scholars. See the demo here:
But the video just shows Claude giving some criticism and then telling them to go look at some journals and talk to experts (it doesn't give any references or specifics).
Why would LLMs help, unless they're trained on classified information, in which case you could also just use an internal search engine? In the end it comes down to how much military, economic, and propaganda power you have and how much of it you are willing to deploy.
The whole interaction with LLMs, which centers on clicking and wrestling with a stupid, recalcitrant dialogue partner, distracts from thinking. Better to read the original information yourself and take a long walk to organize it in your own mind.