The ChatGPT craze is sweeping the American workplace, and raising alarms.


ChatGPT offers users immense convenience, but some companies worry about potential risks to intellectual property and strategy.

A Reuters/Ipsos poll shows that, even though companies such as Microsoft and Google have restricted the tool, many American employees continue to use ChatGPT for basic work tasks.

People use ChatGPT to assist with daily tasks such as drafting text, summarizing information, and conducting preliminary research, which helps improve work efficiency.

The online survey on artificial intelligence was conducted between July 11 and 17. Some 28% of respondents said they regularly use ChatGPT at work, while only 22% said their employers explicitly allow such external tools.

Reuters, in collaboration with Ipsos, surveyed 2,625 adults. Ten percent of respondents said their bosses explicitly prohibit external artificial intelligence tools, and about 25% were unsure whether their companies allow the technology.

ChatGPT is a chatbot that, since its launch in November last year, has become the fastest-growing application in history. Its developer, OpenAI, has clashed with regulators: privacy watchdogs have criticized the company's large-scale data collection, raising concerns about personal privacy and data protection.

Human reviewers at AI companies may read the conversations users have with ChatGPT, and researchers have found that such models can reproduce data absorbed during training, regenerating content similar to the original conversations. This makes data privacy and protection an important consideration when using these tools.

Ben King, a vice president at corporate security company Okta, says users often do not fully understand how generative AI services collect their data, especially because many of these services, particularly free ones, involve no signed contract. Companies therefore need to pay closer attention to data privacy and security to ensure their data is properly protected and managed.

OpenAI declined to comment on the potential impact of individual employees using ChatGPT. In a recent article, OpenAI stated that it respects its partners' intended uses of data and will not use that data to further train its chatbot without authorization.

Google's Bard may collect various types of user data. The company lets users delete their history from their accounts, which also deletes the content they entered into the AI, but Google has not provided further details.

Tinder employees in the United States say that although the company does not officially permit ChatGPT, some staff still use it for everyday emails and other tasks. Even where company policy disallows the technology, employees use it to draft items such as humorous calendar invitations and farewell emails, and possibly for general research, phrasing their prompts generically so as not to disclose internal company information, thereby staying within company rules while protecting their privacy.

Reuters could not verify specific instances of Tinder employees using ChatGPT, but Tinder stated it regularly provides guidance to its employees to ensure the safe use of their data.

In May this year, Samsung Electronics discovered employees uploading sensitive code to the platform and banned staff worldwide from using ChatGPT and similar AI tools. On August 3, Samsung said it was conducting a review to create a secure environment for generative AI that would help improve employee productivity and efficiency; until those measures are ready, use of the tools on company devices is temporarily prohibited.

In June this year, Reuters reported that Alphabet was promoting the Bard chatbot globally while also cautioning its employees worldwide about how to use it.

Google says that although Bard may make undesired code suggestions, the tool still has real value in improving programmers' efficiency. The company also said it is committed to transparently disclosing the limitations of Bard's technology.

Risk Warning and Disclaimer

The market carries risks, and investment should be approached with caution. This article does not constitute personal investment advice and does not take into account individual users' specific investment goals, financial situations, or needs. Users should consider whether any opinions, viewpoints, or conclusions in this article suit their particular circumstances. Any investment made on this basis is at the user's own risk.



Benefit Cost Ratio

The Benefit-Cost Ratio (BCR) is a financial metric that relates the economic benefits of a project or investment to its costs. It compares a project's anticipated benefits with its costs to gauge feasibility and expected return.
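As an illustrative sketch, the ratio is commonly computed as the present value of benefits divided by the present value of costs; the cash flows and discount rate below are invented example numbers, not data from any real project:

```python
# Hypothetical illustration of the Benefit-Cost Ratio (BCR).
# All figures below are made-up example values.

def benefit_cost_ratio(benefits, costs, rate):
    """Return the ratio of discounted benefits to discounted costs.

    benefits, costs: per-period cash flows (period 0, 1, 2, ...)
    rate: discount rate per period
    """
    def present_value(flows):
        # Discount each period's cash flow back to period 0.
        return sum(f / (1 + rate) ** t for t, f in enumerate(flows))

    return present_value(benefits) / present_value(costs)

# A project costing 100 upfront that returns 40 per year for 3 years,
# discounted at 5% per year:
bcr = benefit_cost_ratio([0, 40, 40, 40], [100, 0, 0, 0], rate=0.05)
print(round(bcr, 2))  # prints 1.09
```

A ratio above 1.0 indicates that discounted benefits exceed discounted costs, which is the usual threshold for considering a project feasible.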
