The Council of Bars and Law Societies of Europe (CCBE) has adopted a technical guide for lawyers on the use of GenAI.
It is addressed to lawyers and bars/law societies (which, in most countries, also regulate lawyers).
It is a clear and deliberate step to educate legal professionals on more technical matters that were previously rarely a focus of their training.
The purpose of the paper is to better acquaint lawyers with their options, including the possibility of using local models and understanding the trade-offs.
CCBE was created by the bars/law societies as a representative body of the profession in Brussels, to which the EU institutions can turn.
I believe this is an important step toward better dissemination of the use of AI models.
What makes it notable is the level of technical detail, which will hopefully help lawyers formulate what they need: it compares deployment options (on-premises, colocation, IaaS, or fully managed/API) and gives indicative hardware budgets (dated September 2025) ranging from ~€2,000 to €20,000 for local inference boxes. It also discusses quantisation (FP16 vs INT8 vs INT4), context-length trade-offs, and tokens-per-second thresholds for interactive use.
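To make the quantisation trade-off concrete, here is a minimal back-of-the-envelope sketch (my own illustration, not taken from the CCBE guide) of how quantisation level drives the GPU memory a local model needs. The ~20% overhead factor for KV cache and activations is an assumption for illustration.

```python
# Rough VRAM estimate for running a local LLM at different quantisation levels.
# Assumption: weights dominate memory; add ~20% overhead for KV cache/activations.

BYTES_PER_PARAM = {"FP16": 2.0, "INT8": 1.0, "INT4": 0.5}

def vram_gb(params_billions: float, quant: str, overhead: float = 1.2) -> float:
    """Approximate GPU memory (GB) needed to serve a model of the given size."""
    return params_billions * BYTES_PER_PARAM[quant] * overhead

# Example: a 70B-parameter model at each quantisation level.
for quant in BYTES_PER_PARAM:
    print(f"70B model at {quant}: ~{vram_gb(70, quant):.0f} GB")
```

Under these assumptions, the same 70B model drops from roughly 168 GB at FP16 to roughly 42 GB at INT4, which is why quantisation is what makes the guide's €2,000-€20,000 hardware range plausible for local inference at all.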
Curious to hear your views on whether there is any point in going in this direction. Or if you believe anything important is missing or incorrect.
District5524•2h ago