Tag: Artificial Intelligence

Nobody gets fired for following the algorithm . . .

A recent study published in the Proceedings of the National Academy of Sciences examines “quantification fixation,” a cognitive bias in which individuals prioritize numerical data over qualitative information when making decisions. Through a series of experiments, the researchers found that people tend to favor options with quantifiable attributes, even when qualitative factors are equally or more important. This bias can lead to suboptimal choices, as decision-makers may overlook critical qualitative considerations. The study highlights the need for awareness of this bias to improve decision-making processes. #AI #datadrivendecisionmaking

AI in the news: Intelligence Agencies Face Challenges with AI and Developments in AI Ethics

This week The Economist reports on the challenges facing the use of AI in intelligence agencies. One issue cited in the article, “Spy Agencies Have High Hopes for AI”, is the dearth of adequate data sets that can be ingested: AI depends on massive data sets to be effective, and in a number of intelligence settings little data is available.

Fast Company published two articles in its March/April 2021 issue on the subject of AI ethics, “AI Has a Big Tech Problem” and “AI Ethics: Taking Stock and the Way Forward”. (Sorry, the online links aren’t available – please contact us if you’d like copies of these articles.) The overriding message in both is that thought leadership in AI is concentrated in a small number of firms and institutions. The risk reported is that ethical decisions made by this small cohort may be influenced, however subtly, by competing demands – and not necessarily in the best interest of society as a whole. Something to consider.

Automation Déjà Vu?

One facet of digital transformation currently under intense discussion is the utility of citizen-development technologies such as low-code, no-code, and RPA. The argument against these technologies is that they lack the industrial strength of other tools: the best way to automate, critics say, is to avoid these easy-to-use technologies, or at best use them in a transitional fashion.

We find this argument reminiscent of the early days, when advocates of low-level languages, like assembler, dismissed high-level languages, like COBOL, as inefficient and unsuited for high-volume work. All of us from that era know how that worked out – improved compute performance and the ease of use of high-level languages led to the first digital transformation across industries.

The focus of the current arguments is misplaced. Firms should employ technologies that are fit for purpose, and citizen-developed solutions may be the perfect fit in a variety of situations. The important decision has always been to determine the needs and map them against the near-term and long-term use of any technology. If a solution works today but does not scale – and scale is a requirement – that is a problem. If the solution adds unacceptable complexity or severe maintenance risk, that is also a problem. If the technology will not deliver results on market-based timelines, you may not be using the correct technology solution. If the technology does not have the required flexibility, that is a problem.

This all boils down to making informed decisions when deploying technology. There’s nothing new here. It’s also important to note that in some cases not using technology at all is the right course of action!
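The fit-for-purpose checks described above can be sketched as a simple evaluation routine. This is purely an illustrative sketch – the `Tool` fields, names, and thresholds are hypothetical, not drawn from the article or any specific product:

```python
# Illustrative sketch of the fit-for-purpose criteria discussed above.
# All names and fields here are hypothetical examples.

from dataclasses import dataclass

@dataclass
class Tool:
    name: str
    scales: bool            # handles long-term volume growth?
    added_complexity: str   # "low", "moderate", or "high"
    meets_timeline: bool    # delivers on market-based timelines?
    flexible: bool          # adapts as needs change?

def fit_for_purpose(tool: Tool, scale_required: bool) -> list:
    """Return the list of problems that rule a tool out; empty means it fits."""
    problems = []
    if scale_required and not tool.scales:
        problems.append("works today but does not scale")
    if tool.added_complexity == "high":
        problems.append("adds unacceptable complexity or maintenance risk")
    if not tool.meets_timeline:
        problems.append("cannot deliver on market-based timelines")
    if not tool.flexible:
        problems.append("lacks required flexibility")
    return problems

# A citizen-development tool can be the right answer for a small, stable
# workflow, and the wrong one when scale is a hard requirement:
rpa = Tool("hypothetical-RPA", scales=False, added_complexity="low",
           meets_timeline=True, flexible=True)
print(fit_for_purpose(rpa, scale_required=False))  # [] -> fits this use case
print(fit_for_purpose(rpa, scale_required=True))   # scaling problem flagged
```

The point of the sketch is simply that the same tool passes or fails depending on the requirements you map it against, which is the decision the article argues firms should be making explicitly.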