Does anyone else block AI tools like ChatGPT or Zoom note-taking extensions in their office network? Why or why not?
My team has been debating the risks involved, but I've been on the fence. I saw an article (part ad for Asterisk) on it this morning and it got me thinking about it again.
We don’t block ChatGPT because our CEO is absolutely drinking the AI kool-aid. He believes that not using ChatGPT for everything is far riskier to the business than anything else in the world could be.
However, we do block all of the “AI” note-taking / transcription apps. The only exception is the one in Microsoft Teams that comes with the Premium license.
@whatwhatwhatwhat Our marketing team is absolutely champing at the bit for it. I do think there are some safe and reasonable applications of text generation, but I'm wary of whether they know what "safe" or "reasonable" means. I feel like Teams would be a better option than most, considering how many compliance/regulation acronyms Microsoft has collected.
I REALLY do wish they'd stop trying to push everything AI-related through the Chrome Web Store though...
I mean, what are the cons of allowing ChatGPT on your network? It could be a very useful tool for a lot of people. What are the actual risks of leaving it accessible?
@adhdplantdev The risk as I understand it is compromising confidentiality. Many AI apps, including ChatGPT, can use your prompts as training data, which creates a risk when a random end user pastes confidential information into a prompt. Depending on the compliance/regulation requirements you're under, I could argue that counts as an accidental disclosure and requires a notice to be sent out.
I suppose it comes down to how much you trust your users. It's going to be very difficult to block out all AI solutions, especially now that there are open-source GPT models. It's a good point about accidentally putting confidential information in a prompt, or having the AI suggest code that may be under a toxic license. A massive company probably carries a much higher risk than a smaller one, and it also depends on your company culture. Either way, if you try to block it, I'd expect a fight to unblock it.
It’s blocked for us, because it’s a confidentiality and licensing liability.
Confidentiality because prompt data can be used for training / review outside the company.
Licensing because if it generates code and we use that code, it’s hypothetically possible that code is actually part of a codebase subject to a restrictive / “toxic” license that could get the company sued if not handled properly.
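For anyone who does go the blocking route, the usual mechanism is DNS/domain filtering rather than anything per-app. A minimal sketch in Python of the suffix matching a DNS filter (Pi-hole, NextDNS, firewall DNS rules, etc.) performs; the domain list here is purely illustrative, not a complete or vetted blocklist:

```python
# Illustrative sketch of DNS-blocklist suffix matching.
# The domains below are examples only, not an exhaustive AI-tool blocklist.
BLOCKED_SUFFIXES = {
    "chat.openai.com",
    "chatgpt.com",
    "otter.ai",       # example AI note-taker
    "fireflies.ai",   # example AI note-taker
}

def is_blocked(hostname: str) -> bool:
    """True if hostname equals, or is a subdomain of, any blocked suffix."""
    hostname = hostname.lower().rstrip(".")
    return any(
        hostname == suffix or hostname.endswith("." + suffix)
        for suffix in BLOCKED_SUFFIXES
    )

print(is_blocked("chat.openai.com"))      # True
print(is_blocked("app.fireflies.ai"))     # True  (subdomain match)
print(is_blocked("teams.microsoft.com"))  # False
```

The catch, as noted above, is that domain lists only catch hosted services; locally run open-source models never touch the resolver at all.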