AI: Ethics, costs, and the importance of seeing beyond the bottom line


By Matthew Pitkin, Application Support Analyst
The AI boom of the past several years has transformed the tech industry and beyond, with everyone from startups to governments looking for ways to make use of the new technology. And machine learning is genuinely powerful: its ability to sort and find patterns in datasets (especially data types that have historically been difficult to process by conventional computational means) is remarkable, as is the pace at which it has advanced in a relatively short span of time. Aim’s DataBelt® is a prime example of how that strength can be harnessed to genuinely help users analyse their data.
But while the technology has progressed rapidly, wise and responsible use of it has in places lagged behind. A poorly thought-out application can be costly to both brand and customer trust, even if it delivers short-term savings. Last year, for example, Coca-Cola chose to create its Christmas advert with generative AI services rather than conventional filmmaking techniques. The company did not disclose what the advert cost, but it was completed in a matter of months (compared with the much lengthier turnaround of its past adverts), which implies significant savings. Yet the public response to the advert, a centrepiece of Coca-Cola’s annual holiday sales strategy, was primarily ridicule, with viewers declaring it “soulless slop” and deriding its numerous visual inconsistencies. Were those savings worth the reputational damage? The decision is especially questionable given the resources a giant like Coca-Cola has available; a conventionally made advert was well within its means, and would almost certainly have produced a better-received result.
Nor is this confined to the business sphere. Multiple police forces (including the London Metropolitan Police) have been pitched “predictive policing” tools that identify crime patterns and hotspots so that officers can be dispatched pre-emptively, before crimes happen. On paper this sounds very efficient: a great way for administrators to show they are working toward government targets. But bias is a recurring problem in machine learning, and nearly all AI chatbots now carry disclaimers about the potential for incorrect information; applied to law enforcement, such flaws can have far-reaching consequences. The historical data used to train these models may disproportionately focus on certain communities, and if officers are sent out presuming a crime will occur at a location because an AI has associated that community with crime, erroneous charges may be levelled at citizens on the basis of identity rather than actual wrongdoing. It may improve arrest rates, but is that worth the erosion of public trust and the growing burden placed upon the courts?
Finally, even those who could be considered experts in the field have shown a certain recklessness. OpenAI faces multiple ongoing lawsuits over ChatGPT-induced mental health crises, in particular self-harm and suicide attempts. These have been linked to the continual drive to boost user engagement, which favours models that engage users more strongly but carry weaker safeguards for vulnerable people. OpenAI’s defence in one such case has been simply to hide behind its terms of service, arguing that asking about such topics violates the user agreement. Legally, that may be a valid defence; morally, however, it is dubious to push for more sycophantic models to drive engagement while knowing this can endanger certain users, then point to the terms of service to escape liability when things go wrong. Perhaps more importantly for OpenAI’s business, it is poor optics for a company trying to make AI tools palatable to the general public.
None of these problems is unique to AI, of course; most new technologies have had overly enthusiastic adopters rushing forward in the name of progress without pausing to consider the cost of their haste, and many of those technologies are now long-established, valued parts of our infrastructure. But precisely because this is not a new phenomenon, it behoves us to learn from the past and approach this technology with responsibility to ourselves and our fellow man, rather than simply chasing the new, shiny thing at all costs.
For more information about our AI-driven tools and services, please contact us here.