

Three Keys to Ethical Artificial Intelligence in Your Organization


By Team | minute read | September 23, 2022


There’s certainly been no shortage of examples of AI gone bad over the past few years–enough to give everyone pause on how (and if) this technology can truly be used for good. If it’s not Facebook selling its users’ data, it’s self-driving cars from Uber that can’t recognize pedestrians in time to slow down or stop.

So while artificial intelligence and deep learning offer a multitude of benefits for companies and people, businesses both big and small are increasingly putting up guardrails on how AI and machine learning are deployed, to ensure the technology is used ethically. If your company is looking into AI apps or an AI platform for your products and services, here are three keys to keep in mind as you get started to ensure you’re practicing ethical AI:

Key #1: Integrate your AI projects and tools into the existing policies, processes, teams, and governance boards your company employs today.  Because AI and ML are new technologies with big consequences, companies and teams have often worked in secrecy, outside the corporate governance structure, as these projects come to market. The result is separate rules and expectations–one set for the AI team, and one for the rest of the business.

Eliminate that perception–and the risk that your data and analytics team operates outside your company’s standards–by establishing transparent adherence to the oversight structures you already operate today. Whether that’s at the board level or within your IT department, keep your AI and ML projects transparent, accountable, and aligned to your company standards. The further your machine learning team is from the rest of the company, the bigger the risk of something going wrong.

Key #2: Build organizational awareness and reward employees for identifying ethical challenges in your artificial intelligence projects.  While the project itself may be top secret and confidential, the AI tools, your AI platform, and your training should not be. Make sure that knowledge of AI and ML in your company is not concentrated in just one or two people, but spread across a team or teams–including trusted advisors outside the company, even up to the board level. The more transparent you are about the tools and platform you’re using, and the more people who understand how these tools work and how they’re intended to be used in your organization, the more accountable your AI program will be.

And, as your new processes and AI-enabled systems roll out, bring your employees into the mix as well. By stating the aim and intention of integrating artificial intelligence into your business and empowering your employees to watch for any lapses, you create an early warning system if something goes wrong–and you can take corrective action before it’s too late.

Key #3: Understand and monitor the impact of your AI project on your employees, customers, and the marketplace at large.  Just as the first inhabitants of Jurassic Park thought they could keep a lid on the Velociraptor population, so too do many of the cautionary tales (including those referenced at the beginning of this blog) start out with the best of intentions. However, with time, hard lessons learned, and hopefully a dash or two of wisdom, we’ve come to understand that the law of unintended consequences applies to this technology more than many others that have come down the pike. As such, even before the AI product or service is launched, it’s important to understand both the intended consequences and the potential pitfalls of the new process. And that understanding needs to include not only those directly affected by the AI-enabled process but the ecosystem and broader marketplace as well.

Finding a new and easier way to serve your customers sounds great. But if it also impacts labor laws, suppliers, and vendors, and brushes up against regulatory guidance? Then there may be an issue. Importantly, this is a process. While only so much can be done ahead of time, once the new process or product is live, the next phase of feedback and learning needs to kick in–and corrective action taken should the dinosaur eggs hatch (if you know what I mean).

Like any powerful new technology, implementing AI and ML in your organization carries huge risks and rewards. Take the lead and make sure your key stakeholders are engaged throughout the development, testing, and rollout phases of the project to increase your odds of success while being a great corporate citizen and a company your employees are proud to work for.


At, democratizing AI isn’t just an idea. It’s a movement. And that means it requires action. We started out as a group of like-minded individuals in the open-source community, collectively driven by the idea that there should be freedom around the creation and use of AI.

Today we have evolved into a global company built by people from a variety of different backgrounds and skill sets, all driven to be part of something greater than ourselves. Our partnerships now extend beyond the open-source community to include business customers, academia, and non-profit organizations.