Such biases in computing are insidious, and they’re not entirely new. For example, a computer algorithm used in 1988 to select medical school applicants for admission interviews discriminated against women and against students with non-European names. Similarly, Amazon scrapped a recruiting algorithm in 2018 after it proved to be biased against women.
AI can ease such problems or make them worse. In any case, reverting to human-only decision-making obviously isn’t the answer.
Whether you use traditional data analytics, decision intelligence, or a combination of the two, you need to take steps to guard against accidental or intentional biases, errors, and reasoning flaws. Here are a few important steps to take to ensure fairness in machine decision-making:

Be proactive: Use AI specifically to seek out and measure discrimination and other known decision flaws throughout the entire decision-making process. Yes, this is using AI to make other AI (and humans) transparent and accountable.
Recognize the problem: Use algorithms to identify the patterns, triggers, and pointers in actions, language, or rules that lead toward discrimination, and then set safeguards against those discriminatory precursors in both machine and human behaviors.
Check the outcomes: AI operates in a sort of black box, where humans can’t quite see what it’s doing, and AI can’t yet explain itself or its actions, either. But that’s okay; you can still check and grade its homework. For example, when checking for fairness in data-based or automated recruitment and hiring, look to see whether the outputs meet current legal standards such as the 80 percent rule (also known as the four-fifths rule), which states that companies should be hiring protected groups at a rate that’s at least 80 percent of the rate for white men. Software developers should also perform disparate impact analyses (testing to see whether neutral-appearing functions have an adverse effect on a legally protected class) before anyone uses the algorithm. If your software comes from a third party, ask to see the results of the analysis and a detailed explanation of how the product works. (The first sketch after this list shows what such a check can look like in code.)
Do the math: Statistical analysis has been around for a long time, and an old-fashioned, routine statistical test can reveal disparities arising from unintentional biases based on gender, race, religion, and other factors. (The second sketch after this list shows one such test.) Be sure, however, to automate the math rather than do it manually: an automated process scales better, speeds results, and is likely more accurate.
Compare your outcomes with the reality of the environment: Context is everything. For example, a low number of female members in the Boy Scouts of America isn’t indicative of a bias against females; rather, it’s a sign of an emerging diversity and inclusiveness (D&I) program taking root. Sometimes the results from calculating disparities in a given situation reveal more about the environment than about any bias in play. If that’s the case, be transparent about both the environment and how the disparate impact analysis was done. You might also want to set alerts for any change in that environment that would warrant a new disparate impact analysis.
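To make that outcome check concrete, here’s a minimal sketch in Python of a four-fifths (80 percent) rule check. The group names and hiring counts are invented for illustration, and the sketch uses the common formulation that benchmarks every group against the group with the highest selection rate; a real disparate impact analysis involves considerably more than this.

```python
# Minimal sketch of an 80 percent (four-fifths) rule check.
# All group names and counts here are hypothetical.

def selection_rate(hired: int, applicants: int) -> float:
    """Fraction of a group's applicants who were hired."""
    return hired / applicants

# Hypothetical hiring outcomes: group -> (hired, applicants).
outcomes = {
    "group_a": (45, 100),
    "group_b": (28, 100),
    "group_c": (33, 100),
}

rates = {group: selection_rate(h, n) for group, (h, n) in outcomes.items()}
benchmark = max(rates.values())  # rate of the highest-hired group

for group, rate in rates.items():
    impact_ratio = rate / benchmark
    verdict = "ok" if impact_ratio >= 0.8 else "potential adverse impact"
    print(f"{group}: rate {rate:.0%}, impact ratio {impact_ratio:.2f} ({verdict})")
```

An impact ratio below 0.8 is a cue to dig deeper, not proof of discrimination by itself.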
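And here’s a minimal sketch of the "do the math" step: an old-fashioned chi-squared test of independence between group membership and the hire/no-hire outcome, using SciPy. Again, the counts are hypothetical.

```python
# Minimal sketch of a routine statistical disparity test: a chi-squared
# test of independence between group and hire/no-hire outcome.
# The counts are hypothetical.

from scipy.stats import chi2_contingency

# Rows are groups; columns are [hired, not hired].
contingency = [
    [45, 55],  # group_a
    [28, 72],  # group_b
]

chi2, p_value, dof, expected = chi2_contingency(contingency)
print(f"chi-squared = {chi2:.2f}, p-value = {p_value:.4f}")

if p_value < 0.05:
    print("Hiring rates differ by more than chance plausibly allows; investigate.")
else:
    print("No statistically significant disparity at the 5 percent level.")
```

A small p-value says the disparity is unlikely to be chance alone; it doesn’t say why the disparity exists. That’s where the environmental context in the last list item comes in.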
Seeing the Trouble Math Makes
Math is at the center of data science — which isn’t surprising, given that math is at the center of many areas, including music, computer programming, and the universe. However, though math may be the keystone of many things, it isn’t the whole thing of anything.
In fairness to math, it’s prudent to point out that math isn’t a decision, either. The discipline known as decision analysis defines a decision as an irrevocable act: an investment of effort, time, or resources must be fully committed and deployed before an act can technically be deemed a decision. Math doesn’t commit, let alone act. It calculates. As such, it delivers a calculation, not a decision. The decision, my friend, rests with you, the diviner of the calculation.
It’s worrisome news, I know. It’s so much more convenient to praise or blame the math for data-driven decisions and, in so doing, absolve ourselves of any responsibility or accountability. But no: at best, math gives us limited cover for bad decisions. See? I told you math makes for trouble!
The limits of math-only approaches
When you come right down to it, math isn’t much of a strategist. If your business strategy involves putting all your eggs in the mathematical basket, you’re staking your business’s future on a naïve strategy that more likely than not will underperform. That’s what typically happens when strategies depend too much on quantitative values.
It’s nonetheless true that quantitative values have fueled (and continue to fuel) the harvesting of big data’s low-hanging fruit. That is to say, many of the algorithms used up to this point do have value and will continue to have value going forward, but only in certain circumstances. For example, an algorithm that predicts when a mechanical part will reach the end of its usefulness is a reliable indicator of when that part should be replaced. Such decision triggers (or decision recommendations, if you prefer a different term) will continue to be helpful; the sketch below shows a bare-bones version of one. That being said, without added qualitative inputs for balance and context in decision-making, pure math tends to go a bit sideways in real-world applications.
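As an aside on those decision triggers, here’s a bare-bones sketch of one, assuming a hypothetical linear wear model for a single part; the readings, rates, and thresholds are all made up. A production system would rely on a fitted degradation or survival model rather than a straight line.

```python
# Minimal sketch of a replace-the-part decision trigger.
# The wear model, thresholds, and sensor readings are all hypothetical.

def remaining_useful_life(wear_now: float, wear_rate: float, wear_limit: float) -> float:
    """Hours until wear reaches the failure limit, assuming linear wear."""
    return max(0.0, (wear_limit - wear_now) / wear_rate)

# Hypothetical sensor-derived values for one bearing.
wear_now = 0.72            # current wear index (0 = new, 1.0 = failure limit)
wear_rate = 0.002          # wear index increase per operating hour
wear_limit = 1.0
safety_margin_hours = 100  # replace before getting this close to failure

rul = remaining_useful_life(wear_now, wear_rate, wear_limit)
if rul <= safety_margin_hours:
    print(f"Trigger: schedule replacement (about {rul:.0f} hours of life left).")
else:
    print(f"No action: about {rul:.0f} hours of life remain.")
```

Note that the trigger only recommends. Whether to actually pull the part, given spare inventory, downtime costs, and safety rules, is the decision, and that’s exactly where the qualitative inputs come in.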
So, what could act as qualitative measures in decision-making? For the most part, they are things you associate with human qualities, such as behaviors, emotional responses, talents, instincts, intuitive intelligence, experience, cultural interpretations, and creativity. Folks often refer to them as soft skills, but Google’s Cassie Kozyrkov hits the nail on the head when she says that it’s better to think of these skills as “the ‘hardest to automate.’”
I’m all for cultivating the soft skills, as my arguments throughout this book make clear. But I’m not about to throw the baby out with the bathwater. The points I make here in no way negate or contradict the usefulness of math in data science, data analytics, or decision processes. You can’t just skip the math in decision intelligence — nor should you want to. The good news is that much of the math you need has already been built into many of the more useful analytical tools available to you, making them much less troublesome and far easier to use. (I tell you more about tools with automated math later, in Chapter 7.) For now, the point is that math alone does not a decision make.
Decision intelligence adds to the data sciences; it doesn’t lessen the value of the associated disciplines, experiences, tools, or lessons learned thus far in scalable decision-making. Rather, it involves rethinking how and when to use those disciplines, experiences, tools, and lessons in the decision-making process. Make no mistake: math and algorithms remain important cornerstones in many of the tools. In emerging decision intelligence and related tools, however, math and algorithms are decoupled from the decision-making process in the user interface and pushed to the background.