
This is the first in a series covering a small portion of how the MedicoLegal and Expert Opinion industry in Canada is evolving with the arrival of generative AI like ChatGPT. Here we consider how AI, specifically ChatGPT and GPT-4, is making reporting both easier and more secure. Today, it has never been easier for an expert to generate a report, and it has never been easier to be caught using templates, other people’s work, and inconsistent opinions.

Our industry is constantly evolving and becoming more sophisticated, and one of the most recent developments is the use of AI language models like GPT-3 and GPT-4. These models are not only making it easier to generate reports; they are also improving the security of reports and the sanctity of justice by making it easier for judges to verify whether a report was copied directly from other notes or from other reports on file in courthouse records.

GPT-3 and GPT-4 are both language models that use deep learning algorithms to generate human-like text. These models have been trained on vast amounts of data and can produce expert reports with minimal input from human writers. To give you an idea of how far this technology has come, this blog post is itself an example of generative AI in action!

It won’t be long before experts can use generative AI to generate medicolegal expert reports faster, more clearly, and potentially with more accuracy than ever.
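As a rough illustration of what that workflow could look like, here is a minimal sketch using the OpenAI Python library to turn an expert’s point-form findings into a prose draft. The model choice, prompt wording, and the `draft_report_section` helper are assumptions for illustration, not a production medicolegal workflow, and patient-identifying information should never be sent to a third-party API without appropriate safeguards.

```python
# A minimal sketch, assuming the OpenAI Python SDK (>= 1.0) and an
# OPENAI_API_KEY environment variable. The helper name and prompt are
# illustrative only; this is not a production medicolegal workflow.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def draft_report_section(findings: list[str]) -> str:
    """Turn an expert's point-form findings into a formal prose draft."""
    prompt = (
        "Rewrite the following point-form clinical findings as a clear, "
        "formal paragraph for a medicolegal expert report:\n- "
        + "\n- ".join(findings)
    )
    response = client.chat.completions.create(
        model="gpt-4",  # assumed model choice; any capable model works
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

print(draft_report_section([
    "full range of motion in the left shoulder",
    "no neurological deficits observed",
    "symptoms consistent with soft-tissue injury",
]))
```

The expert still reviews and edits the draft; the model only accelerates the writing, it does not replace the opinion.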

However, the use of AI language models like GPT-3 and GPT-4 has another important benefit for the industry: running these documents through verification programs can make it easier to detect instances where an expert has not written their own report. These models can produce text that is similar in style and vocabulary to the expert’s own writing; as of now, however, the generated reports still carry small, detectable differences.
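One simple signal such a verification program might use is statistical predictability: text produced by a language model tends to be unusually easy for a similar model to predict. The sketch below scores a passage’s perplexity under GPT-2 using the Hugging Face transformers library. This is a toy heuristic assumed for illustration, not any specific commercial detector, and the sample passage and any cutoff are invented.

```python
# A toy sketch of one detection signal, assuming the Hugging Face
# transformers and torch packages are installed. Real detectors combine
# many signals; this is illustrative only.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Lower perplexity means more predictable text, one weak hint of machine authorship."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        loss = model(enc.input_ids, labels=enc.input_ids).loss
    return torch.exp(loss).item()

passage = "The claimant presented with a full range of motion in the left shoulder."
print(f"Perplexity: {perplexity(passage):.1f}")
# An unusually low score could flag a passage for human review.
```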

For example, AI language models like GPT-3 and GPT-4 can be trained to detect patterns in the way a particular expert writes, such as their use of specific words or phrases. When generating reports, the model can then compare the expert’s writing to the generated text and highlight any areas where the two differ significantly. Under scrutiny, this will make it much harder for an expert to pass off work they did not write themselves, and it makes it easier for a judge or an opposing advocate to examine both the substance and the style of a report. It will also open a separate debate around whether this is acceptable as expert testimony or an expert report: some judges may consider it ghost-writing, while others may accept it if the expert who submitted the report will testify that its contents reflect their beliefs.
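A very simplified version of that style comparison can be built with character n-gram profiles, a standard stylometry technique. The sketch below, assuming scikit-learn is installed, compares a sample of an expert’s known writing against a candidate report and prints a cosine similarity. The sample texts and the idea of a review threshold are assumptions for illustration, not the method of any actual courthouse tool.

```python
# A minimal stylometry sketch, assuming scikit-learn. Character n-gram
# profiles are a standard authorship-attribution feature; the sample
# texts here are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

known_writing = (
    "On examination, the claimant demonstrated full range of motion. "
    "In my opinion, the findings are consistent with a soft-tissue injury."
)
candidate_report = (
    "Examination revealed an unrestricted range of motion. The clinical "
    "picture aligns with a soft-tissue injury of moderate severity."
)

# Character n-grams (3 to 5 characters) capture habits of phrasing,
# punctuation, and word choice better than whole words do on short texts.
vectorizer = TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 5))
profiles = vectorizer.fit_transform([known_writing, candidate_report])

similarity = cosine_similarity(profiles[0], profiles[1])[0, 0]
print(f"Style similarity: {similarity:.2f}")
# An unusually low similarity to the expert's known writing could
# flag the report for closer review.
```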

The use of AI language models like GPT-3 and GPT-4 will be a battleground for these debates, but in the end it is a tool with the potential to help the wheels of justice turn more smoothly, and to keep the blind scales of justice as even as possible.
