"Generative AI has the capacity to disrupt and redefine the professional landscape, but it is clear from our findings that there is a trust gap with professionals," said Steve Hasker, President and CEO, Thomson Reuters. "The future of professional work is set to be revolutionized by generative AI, and as an industry, we need to work together to find the right balance between the benefits of technology and any unintended consequences. We believe this will help our customers to first trust the transformative power of generative AI, and then harness the opportunity to shape the future of their professions."
Among the professionals surveyed, the potential of generative AI is widely recognized: 78% of respondents believe generative AI tools such as ChatGPT can enhance legal or accounting work, with the proportion slightly higher among legal professionals (82%) than tax professionals (73%). About half (52%) of all respondents believe generative AI should be used for legal and tax work.
However, despite respondents' strong feelings about generative AI's potential utility, many in the legal and tax fields are still weighing their options before adopting the technology. Only 4% of respondents are currently using generative AI in their operations, with an additional 5% planning to do so. Notably, tax and accounting firms are more open to the idea, with a combined 15% rate of current or planned adoption.
Among those who have adopted or are planning to adopt generative AI technologies, research was the primary use case cited by respondents; about two-thirds of those in corporate legal and 80% of those in tax identified it as the most compelling use. Knowledge management, back-office functions, and question-answering services were also cited as use cases of interest.
Risk perception appears to be the major stumbling block to adoption of generative AI tools. A significant 69% of respondents expressed concerns about risk, suggesting that fear may be holding back wider adoption. While the potential of generative AI tools is recognized, an air of uncertainty remains, underlining the need to establish trust and to further education and strategic planning around implementation.
Despite concerns around the risks to privacy, security, and accuracy, very few organizations are actively taking steps to limit the use of generative AI or ChatGPT among employees. Twenty percent of respondents said their firm or company has warned employees against the unauthorized use of generative AI at work. Only 9% of all respondents, meanwhile, reported their organization had banned the unauthorized use of generative AI.
The Thomson Reuters Institute conducted three separate online surveys of legal and tax professionals in the United States, United Kingdom, and Canada. Respondents were recruited either through an online invitation or through the Thomson Reuters Influencer Coalition panel.
- First survey: Aimed at mid-size and large law firms, ran between March 21-31, 2023, and received 443 applicable respondents.
- Second survey: Aimed at corporate legal departments, ran between April 11-25, 2023, and received 587 applicable respondents.
- Third survey: Aimed at tax and accounting firms and corporate tax departments, ran between May 3-15, 2023, and received 771 total applicable respondents.
Most respondents from law firms and tax firms were from mid-sized firms, representing 62% of law firm respondents and 55% of tax firm respondents. For corporate legal and tax, most respondents were from small or midsize departments: 88% of corporate legal respondents and 87% of corporate tax respondents were from departments of 50 people or fewer.
Respondents completing the survey were also asked selected open-ended questions about why generative AI should not be used for legal or tax work, what the potential risks of generative AI might be, and whether they believed those risks existed. The Thomson Reuters Institute also conducted additional qualitative interviews to further explore beliefs about generative AI, supplementing the survey responses.