The Risks of Using ChatGPT at Work

02 Jun

Looks Can Be Deceiving

ChatGPT has lately been getting a lot of attention for its perceived ability to streamline work by completing simple, mundane tasks. Workers have been using ChatGPT to analyze reports, write communications such as blog posts, and even identify bugs in code. However, many of these workers fail to recognize the serious security concerns associated with ChatGPT, and they miss the point of how the platform is meant to be used. Here is why Case Medical does not use ChatGPT in any of its applications:

Major Security Concerns

ChatGPT is an open machine learning platform, meaning every piece of information entered into it can be used to train its underlying model. That information may then surface in responses to other users of the platform. This poses a serious risk for workers who input sensitive or proprietary information, or who rely on the system to help them do their jobs. Samsung recently announced that some of its proprietary code was leaked through ChatGPT after a worker entered it into the system to try to find a bug. That code is now part of the ChatGPT system, potentially accessible to anyone in the world, and there is nothing Samsung can do about it. In another example, a New York City attorney is now facing sanctions after he used six fake legal citations from ChatGPT in a case he was working on. Workers who use ChatGPT not only risk exposing sensitive or proprietary information to the public; they also risk receiving inaccurate information in return.

Legal Issues Surrounding ChatGPT

There are also several serious legal issues surrounding ChatGPT and its use. Returning to the Samsung leak: users of ChatGPT whose generated solutions incorporate that leaked code could be vulnerable to legal action from Samsung. More broadly, any protected or copyrighted data used by ChatGPT exposes both its users and OpenAI (the makers of ChatGPT) to lawsuits. Suits are being filed right now on behalf of artists whose work has been found in pictures, communications, and even songs generated by ChatGPT. These lawsuits seek damages from both OpenAI and the users who generated the content. Personal data is another issue, as the platform has no safeguards to protect it, which prompted Italy to ban ChatGPT entirely.

Keep Everything Internal, and Safe

Workers should not be using ChatGPT to assist them with their jobs. The risks are too great, and doing so misses the purpose of ChatGPT in the first place. ChatGPT is a research tool, closer to a novelty than a legitimate aid for work. It is meant for experimenting with AI and pushing the limits of the technology; it is not meant to be a safe or reliable source of information. At Case Medical, we do not use ChatGPT, and we keep everything we do in house. Our software, CaseTrak360, is entirely self-contained on our servers and does not interact with the outside world. Everything, from the internal chat to the video conferencing system to the reports and data analysis, is self-contained within the software. Case Medical is also ISO 27001 certified, meaning we meet the highest standards for data security. You can be sure that CaseTrak360 is a safe and reliable tool to help you manage your SPD and OR. Schedule a demo with us today by reaching out to info@casemed.com.

Copyright © 2015-2020 Case Medical.
All Rights Reserved.