AI Defi Blog

Welcome to our blog! Here, we bring you the latest and greatest in the world of virtual currencies. Whether you're a seasoned pro or just getting started, we've got you covered. Our goal is to provide you with informative and useful content to help you navigate the dynamic world of virtual currencies. So sit back, grab a cup of coffee, and let's jump into the exciting world of crypto together!

January 7, 2023 8:20 AM

Robot wearing dunce hat sits with head in hand in futuristic circuit backdrop

Image Credit: Donald Iain Smith/Getty


At their best, AI systems extend and augment the work we do, helping us to realize our goals. At their worst, they undermine them. We’ve all heard of high-profile instances of AI bias, like Amazon’s machine learning (ML) recruitment engine that discriminated against women or the racist results from Google Vision. These cases don’t just harm individuals; they work against their creators’ original intentions. Quite rightly, these examples attracted public outcry and, as a result, shaped perceptions of AI bias into something that is categorically bad and that we need to eliminate.

While most people agree on the need to build high-trust, fair AI systems, taking all bias out of AI is unrealistic. In fact, as the new wave of ML models go beyond the deterministic, they’re actively being designed with some level of subjectivity built in. Today’s most sophisticated systems are synthesizing inputs, contextualizing content and interpreting results. Rather than trying to eliminate bias entirely, organizations should seek to understand and measure subjectivity better.

In support of subjectivity

As ML systems get more sophisticated — and our goals for them become more ambitious — organizations overtly require them to be subjective, albeit in a manner that aligns with the project’s intent and overall objectives.

We see this clearly in the field of conversational AI, for instance. Speech-to-text systems capable of transcribing a video or call are now mainstream. By comparison, the emerging wave of solutions not only transcribe speech but also interpret and summarize it. So, rather than producing a straightforward transcript, these systems work alongside humans to extend what they already do, for example, by summarizing a meeting and then creating a list of actions arising from it.
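To make the distinction concrete, here is a minimal sketch of that second step: given an already-transcribed meeting, pull out likely action items. The cue phrases and the keyword heuristic are illustrative assumptions; real conversation-intelligence systems use trained ML models, not string matching.

```python
# Toy cue phrases that often signal a commitment or task.
# These are assumptions for illustration, not a production list.
ACTION_CUES = ("i will", "i'll", "we will", "we'll", "todo", "action:")

def extract_action_items(transcript_lines):
    """Return the transcript lines that look like commitments or tasks."""
    items = []
    for line in transcript_lines:
        lowered = line.lower()
        if any(cue in lowered for cue in ACTION_CUES):
            items.append(line.strip())
    return items

transcript = [
    "Thanks everyone for joining.",
    "I'll send the revised budget by Friday.",
    "Action: Dana to schedule the vendor demo.",
    "Let's wrap up there.",
]

print(extract_action_items(transcript))
```

Even this toy version shows where subjectivity enters: deciding which phrases count as "action items" is an interpretive choice, not a neutral transcription step.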


In these examples, as in many more AI use cases, the system is required to understand context and interpret what is important and what can be ignored. In other words, we’re building AI systems to act like humans, and subjectivity is an integral part of the package.

The business of bias

Even the technological leap that has taken us from speech-to-text to conversational intelligence in just a few years is small compared to the future potential for this branch of AI.

Consider this: Meaning in conversation is, for the most part, conveyed through non-verbal cues and tone, according to Professor Albert Mehrabian in his seminal work, Silent Messages. Less than ten percent is down to the words themselves. Yet, the vast majority of conversation intelligence solutions rely heavily on interpreting text, largely ignoring (for now) the contextual cues.

As these intelligence systems begin to interpret what we might call the metadata of human conversation (tone, pauses, context, facial expressions and so on), bias, or intentional, guided subjectivity, is not only a requirement; it is the value proposition.

Conversation intelligence is just one of many such machine learning fields. Some of the most interesting and potentially profitable applications of AI center not around faithfully reproducing what already exists, but rather interpreting it.

With the first wave of AI systems some 30 years ago, bias was understandably seen as bad because they were deterministic models intended to be fast, accurate — and neutral. However, we are at a point with AI where we require subjectivity because the systems can match and indeed mimic what humans do. In short, we need to update our expectations of AI in line with how it has changed over the course of one generation.

Rooting out bad bias

As AI adoption increases and these models influence decision-making and processes in everyday life, the issue of accountability becomes key.

When an ML flaw becomes apparent, it is easy to blame the algorithm or the dataset. Even a casual glance at the output from the ML research community highlights how dependent projects are on easily accessible ‘plug and play’ upstream libraries, protocols and datasets.

However, problematic data sources are not the only potential vulnerability. Undesirable bias can just as easily creep into the way we test and measure models. ML models are, after all, built by humans. We choose the data we feed them, how we validate the initial findings and how we go on to use the results. Skewed results that reflect unwanted and unintentional biases can be mitigated to some extent by having diverse teams and a collaborative work culture in which team members freely share their ideas and inputs.
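One lightweight practice that follows from this point is to validate models per subgroup rather than with a single aggregate score, so that a skew hidden in the overall number becomes visible. The sketch below assumes toy data and hypothetical group labels; it is a starting point, not a complete fairness evaluation.

```python
# Sketch: compare a model's error rate across subgroups instead of
# reporting one overall accuracy figure. Groups, labels and predictions
# below are made-up toy data for illustration.
from collections import defaultdict

def error_rate_by_group(records):
    """records: iterable of (group, y_true, y_pred) tuples."""
    errors = defaultdict(int)
    totals = defaultdict(int)
    for group, y_true, y_pred in records:
        totals[group] += 1
        if y_true != y_pred:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

records = [
    ("A", 1, 1), ("A", 0, 0), ("A", 1, 1), ("A", 0, 1),
    ("B", 1, 0), ("B", 0, 0), ("B", 1, 0), ("B", 0, 0),
]
rates = error_rate_by_group(records)
print(rates)  # here, group B errs twice as often as group A
```

The overall error rate of this toy model is 37.5%, which looks unremarkable; only the per-group breakdown reveals that the errors are concentrated in one group.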

Accountability in AI

Building better bias starts with building more diverse AI/ML teams. Research consistently demonstrates that more diverse teams lead to increased performance and profitability, yet change has been maddeningly slow. This is particularly true in AI.

While we should continue to push for culture change, this is just one aspect of the bias debate. Regulations governing AI system bias are another important route to creating trustworthy models.

Companies should expect much closer scrutiny of their AI algorithms. In the U.S., the Algorithmic Fairness Act was introduced in 2020 with the aim of protecting the interests of citizens from harm that unfair AI systems can cause. Similarly, the EU’s proposed AI regulation will ban the use of AI in certain circumstances and heavily regulate its use in “high risk” situations. And beginning in New York City in January 2023, companies will be required to perform AI audits that evaluate race and gender biases. 
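To illustrate what such an audit can involve, here is a sketch of a common disparate-impact check: the "four-fifths rule," which compares each group's selection rate to the highest group's rate and flags groups below 80%. The data, group names, and the choice of this particular metric are illustrative assumptions; actual audit requirements vary by jurisdiction.

```python
# Sketch of a disparate-impact check on a hiring model's decisions.
# The 0.8 threshold reflects the "four-fifths rule" heuristic;
# the outcomes below are toy data.

def selection_rates(outcomes):
    """outcomes: dict mapping group -> list of 0/1 hire decisions."""
    return {g: sum(v) / len(v) for g, v in outcomes.items()}

def adverse_impact_ratios(outcomes):
    """Ratio of each group's selection rate to the highest group's rate."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: r / best for g, r in rates.items()}

outcomes = {
    "group_x": [1, 1, 0, 1, 0],  # 60% selected
    "group_y": [1, 0, 0, 0, 0],  # 20% selected
}
ratios = adverse_impact_ratios(outcomes)
flagged = [g for g, r in ratios.items() if r < 0.8]
print(flagged)  # groups falling below the four-fifths threshold
```

A check like this measures outcomes, not intent, which is exactly why audits can surface biases that the model's builders never meant to encode.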

Building AI systems we can trust

When organizations look at re-evaluating an AI system, rooting out undesirable biases or building a new model, they, of course, need to think carefully about the algorithm itself and the data sets it is being fed. But they must go further to ensure that unintended consequences do not creep in at later stages, such as test and measurement, results interpretation, or, just as importantly, at the point where employees are trained in using it.

As the field of AI gets increasingly regulated, companies need to be far more transparent in how they apply algorithms to their business operations. On the one hand, they will need a robust framework that acknowledges, understands and governs both implicit and explicit biases.

However, they are unlikely to achieve their bias-related objectives without culture change. Not only do AI teams urgently need to become more diverse; the conversation around bias also needs to expand to keep up with the emerging generation of AI systems. As AI machines are increasingly built to augment what we are capable of by contextualizing content and inferring meaning, governments, organizations and citizens alike will need to be able to measure all the biases to which our systems are subject.

Surbhi Rathore is the CEO and cofounder of



AI DeFi Blog is a top resource for all things related to DeFi and digital assets. Our team of experts is dedicated to providing our readers with the most recent news, insights, and analysis on the dynamic world of DeFi. At AI DeFi Blog, we are passionate about all things DeFi, from margin trading to yield farming and beyond. We believe that DeFi has the potential to change the way we think about finance and financial systems, and we are excited to be a part of this developing movement.

One of the main features of DeFi is that it is built on distributed ledger technology, which allows for peer-to-peer transactions that do not require a third party, such as a financial institution, to facilitate them. This means that you can retain control of your own financial transactions and assets, which can be especially appealing to those who are skeptical of traditional financial systems. DeFi also allows for greater accessibility and inclusion, as it enables anyone with an internet connection to participate in financial transactions and activities. This is particularly important in countries where traditional financial systems may be less developed or inaccessible.

In addition to DeFi, we also cover a wide range of topics related to cryptocurrency, including bitcoin, altcoins, mining, and more. We understand that the world of cryptocurrency can be daunting, especially for those who are new to the space. That's why we strive to provide our readers with clear, concise, and easy-to-understand content that covers the most important aspects of cryptocurrency and DeFi. Whether you're an experienced pro or just starting out, we've got something for you. Our goal is to give our readers the knowledge and tools they need to navigate the world of DeFi and cryptocurrency. So join us as we explore it together!