Calvin Yadav is CEO of IREX, an AI Platform for Smart Cities

Like many leaders in my industry, I am currently running an artificial intelligence company during a time when the ethical use of the tech is top of mind. I have spoken to a number of leaders who believe that in order to innovate — or, I should say, to reach the speed with which we want to innovate — we must first consider our responsibility to create technology that can be leveraged ethically.

You can’t simply say you have no responsibility for how the end-user uses your platform. One of my first priorities as CEO was to grab this lightning rod and lean into the social impact I believe my company creates. I made a point of not hiding behind the “technicalities” of everyday life.

One of my favorite quotes that I feel sums it up well, famously attributed to Sun Tzu, is, “If you know the enemy and know yourself, you need not fear the result of a hundred battles.” Using the same analogy in the business of creating ethical AI, we must understand the data and how this data affects society. As leaders, we can design our business development strategy by understanding the social impact our AI products have.

Why is ethical AI important?

AI and machine learning are not buzzwords that should be used to simply tick a box for messaging purposes. In my company, “trustworthy” is part of our coding and the foundation of our products. Transparency is built into our platform, and we strive to demonstrate how the code is written and how it logs user activity. This way, no one can get away with misusing the platform.
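To make the idea of activity logging concrete, here is a minimal sketch of a tamper-evident audit log. This is an illustration of the general technique, not IREX's actual implementation; the class and field names are hypothetical. Each entry is chained to the previous one by hash, so altering history after the fact is detectable:

```python
import hashlib
import json
import time

class AuditLog:
    """Append-only audit log: each entry embeds the hash of the
    previous entry, so any edit to past records breaks the chain."""

    def __init__(self):
        self.entries = []

    def record(self, user, action):
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        entry = {"user": user, "action": action,
                 "time": time.time(), "prev": prev_hash}
        payload = json.dumps(entry, sort_keys=True).encode()
        entry["hash"] = hashlib.sha256(payload).hexdigest()
        self.entries.append(entry)

    def verify(self):
        """Recompute the whole chain; returns False if any entry
        was altered or reordered after it was written."""
        prev = "0" * 64
        for e in self.entries:
            if e["prev"] != prev:
                return False
            body = {k: v for k, v in e.items() if k != "hash"}
            payload = json.dumps(body, sort_keys=True).encode()
            if hashlib.sha256(payload).hexdigest() != e["hash"]:
                return False
            prev = e["hash"]
        return True

log = AuditLog()
log.record("analyst_1", "query:camera_feed_42")
log.record("analyst_2", "export:report_7")
print(log.verify())  # True for an untampered log
```

The point of the hash chain is accountability by design: the log does not prevent misuse, but it guarantees that misuse cannot be quietly erased afterward.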

This is because, like it or not, I believe AI is here to stay. As we are seeing in Europe, more standards for the tech are being established. As CEOs, CROs and strategists, we should look at these new standards as our new norms. Businesses and startups need to develop their growth strategies around these standards. This type of forward thinking can help protect your AI platform from future unknowns. Startups in the U.S. should not run away from legislation but run toward it. From my perspective, new laws surrounding AI could actually help normalize and protect technological innovations.

How can you help set up your AI for success?

1. Do your research. The first step to creating ethical AI is making a long-term investment in your AI research. This will help ensure you understand the theoretical capabilities and limitations of your AI platform. Your company and AI software both need to be scalable, so make sure during the research and development phase that your product solves multiple problems — ethically.

2. Choose partnerships wisely. If you are raising money at the pre-revenue phase, make sure you pick partners that understand you are in this for the long haul. In my experience, new startup leaders often have some self-doubt and underestimate their abilities to figure things out. But remember that you have already taken a risk by starting your business and you know where you want it to go. So, when you are looking for a partner or an investor, make sure they are adding value and are aligned with your ethics from day one.

One piece of advice I received in my early days was that money has no value; people do. I was so surprised to hear that, but it is true. Cash flow can bring a level of comfort, but you did not become a leader because you like comfort. What you need in your early days is a dance partner that amplifies your sales, supports your values and creates new paths for which you did not originally plan.

3. Evaluate the human and social impact of your solution. When you put human and social impact in the front of your mind, you can create a solution that facilitates collaboration with other humans. And that’s the role I think AI is meant to play: AI’s job is not to make decisions but to help humans make decisions faster. So, my advice is to ensure your AI keeps the person who will be using it in mind. This could help earn the trust of legislators and standard creators, too.

4. Empower your employees. Create an environment where employees feel empowered enough to propose changes to data sets and algorithms. One of my favorite questions that I ask my partners and employees is, “If I gave you $1 billion and you could make any change in the organization, what would you do and why?”

This is important because your employees come from different backgrounds and bring different inputs. Many also want to make sure their skills are making a real difference in our world. Your task as a leader is to provide those opportunities and make sure there are regular “pit stops” full of constructive feedback. This feedback loop can help you assess technology advancements and how an AI strategy might work in one region vs. another. In my company, this approach has also allowed me to create an environment that encourages our engineers to freely express their views on our AI modules and their impact on society.

5. Be transparent, and hold yourself accountable. I believe prioritizing social fairness in the outcomes your AI creates while providing transparency into the architecture will help foster trust in our society and may even help speed up innovation and funding allocation. As leaders, we must remember that we cannot think about accountability after the design and coding; we have to create accountability through design and code.

Consider the documentary Coded Bias, for example, which discussed the bias in facial recognition algorithms. From my perspective, the code is only part of the equation. I believe companies also need to raise accountability and make it a priority to spend funding on improving data sets so that AI does not perpetuate inequity. We as organizational leaders need to improve our data sets and create code for human-aware algorithms.
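One simplified way to start acting on this, and only a sketch of the general idea rather than any specific method the author names, is to measure how groups are represented in a training set before training on it. The function and group names below are hypothetical:

```python
from collections import Counter

def representation_report(labels, tolerance=0.2):
    """Compare each group's share of the data set against an even
    split and flag groups whose share falls more than `tolerance`
    (as a fraction) below that even split."""
    counts = Counter(labels)
    total = sum(counts.values())
    expected = 1 / len(counts)  # share under a perfectly even split
    report = {}
    for group, n in counts.items():
        share = n / total
        report[group] = {
            "share": round(share, 3),
            "underrepresented": share < expected * (1 - tolerance),
        }
    return report

# A hypothetical image data set heavily skewed toward one group:
sample = ["group_a"] * 700 + ["group_b"] * 200 + ["group_c"] * 100
print(representation_report(sample))
```

A report like this is only a crude first check; real bias audits also measure model error rates per group, not just raw counts. But even this simple loop turns “improve our data sets” from a slogan into a number someone owns.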

Forbes Business Council is the foremost growth and networking organization for business owners and leaders.

