Judge Rules ICE Illegally Used ChatGPT for Use-of-Force Reports
A federal judge has ruled that U.S. Immigration and Customs Enforcement (ICE) improperly used OpenAI’s ChatGPT to generate official use-of-force reports, potentially violating the Administrative Procedure Act.
The ruling stems from a case in which ICE relied on ChatGPT to draft a report detailing an incident involving a detained immigrant. The immigrant's legal team argued that the agency failed to provide adequate notice and an opportunity for public comment before adopting the AI tool for such critical documentation.
U.S. District Judge Jeffrey S. White agreed, finding that ICE's use of ChatGPT to generate the report constituted a "rule" under the APA and that the agency should therefore have followed formal rulemaking procedures, including publishing the proposed rule and allowing for public feedback.
Why This Matters: AI in Government Operations
This decision highlights significant legal and ethical questions surrounding the integration of generative AI tools into government operations. While AI can offer efficiency, its use in sensitive areas like law enforcement and immigration requires careful oversight and adherence to established legal frameworks.
The ruling suggests that agencies cannot simply adopt AI tools for official functions without proper procedural safeguards. It could set a precedent for how other government bodies use AI in report generation and decision-making.
ICE has not yet commented on the judge’s ruling or its potential implications for its future use of AI technologies.