It occurs to me that the selection of tasks to delegate to "AI" (read: LLMs) is a better test of whether the operator is an expert than of how good the model is.
As thought fodder, I offer the many supposedly high-level attorneys, law firms, and (in a recent Georgia state case, to name but one) even judges who have either failed to review or foolishly accepted, and then filed, hallucinated crappola.
One might consider White & Case lawyers to be experts; the same goes for most attorneys practicing in federal courts, and for the judges presiding over them. Their real-world delegation of fiduciary duties to AI doesn't prove anything about the quality of the AI; rather, it reveals their own lack of expertise, both in the use of LLMs and, apparently, in knowledge of the law, basic legal research, and essential business skills such as substantive proofreading.