Machines Still Work For Us: Understanding vs Enlightenment




I was talking with a friend recently about the gap between expectations and reality. We agreed that this gap is the source of much disappointment. Trouble, even.


So how do we close it?


Whether in business or in life, it helps to understand what happened and why.


Machine Learning, an application of Artificial Intelligence, calls this Causal Inference.

Think of hearing piano music and concluding that someone is playing piano nearby. To identify the true cause of the music, you must investigate further: it could be a recording, a harpsichord, or an auditory hallucination.
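To make the piano example concrete, here is a small toy simulation (the scenario and all variable names are my own invention, purely for illustration) showing how a hidden common cause can produce a correlation that looks like causation:

```python
import random

random.seed(0)

# Hypothetical sketch: a hidden common cause (a concert hall nearby)
# makes two observations correlate without either causing the other.
def simulate(n=10_000):
    hear_music, see_pianos = [], []
    for _ in range(n):
        concert_hall = random.random() < 0.3  # hidden confounder
        # Hearing music and seeing pianos both depend on the hall,
        # not on each other.
        hear_music.append(concert_hall and random.random() < 0.9)
        see_pianos.append(concert_hall and random.random() < 0.8)
    return hear_music, see_pianos

music, pianos = simulate()
p_piano = sum(pianos) / len(pianos)
p_piano_given_music = sum(p for m, p in zip(music, pianos) if m) / sum(music)

print(f"P(see piano)              = {p_piano:.2f}")
print(f"P(see piano | hear music) = {p_piano_given_music:.2f}")
# The conditional probability is far higher, yet hearing music does not
# cause pianos to appear: the link comes from the hidden "concert hall."
```

A correlation-based model would happily use the music to predict the piano; causal inference asks what actually produces both.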


But how well can computers really understand?


Harvard University employs a “teaching for understanding” model that encourages students to think, analyze, problem-solve, and make meaning of what they have learned.


Why? In finding meaning, we find purpose, discover our values, gain a sense of control over events in our life, and find joy.


In the study “Towards Causal Representation Learning,” the researchers explain that machine learning struggles with causality because it relies heavily on large, predefined data sets. What if those data sets aren’t there?


Causal AI can do automated analysis by tracing and explaining exactly what's happening at every step based on specific contextual data. However, there are fundamental problems with causal inference:


  1. Most machine learning-based solutions focus on predicting outcomes, not understanding causality.

  2. In many cases, it is not possible to observe the outcome of a scenario that did not actually occur. This is referred to as the counterfactual scenario.

  3. Correlation-based AI is probabilistic and requires humans to verify the accuracy of results.
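Point 2, the counterfactual problem, can be sketched in a few lines. This is a toy potential-outcomes simulation of mine, with a treatment effect assumed up front purely for illustration:

```python
import random

random.seed(1)

# Toy potential-outcomes simulation. Every unit has TWO potential
# outcomes, but we only ever observe one of them: the counterfactual
# outcome is missing by construction.
TRUE_EFFECT = 3.0  # assumed here; unknowable in real life

treated_outcomes, control_outcomes = [], []
for _ in range(2000):
    y_control = random.gauss(10, 2)      # outcome if untreated
    y_treated = y_control + TRUE_EFFECT  # outcome if treated
    if random.random() < 0.5:            # randomized assignment
        treated_outcomes.append(y_treated)   # y_control is never seen
    else:
        control_outcomes.append(y_control)   # y_treated is never seen

# Randomization lets us estimate the AVERAGE effect even though no
# individual effect is ever directly observed.
estimate = (sum(treated_outcomes) / len(treated_outcomes)
            - sum(control_outcomes) / len(control_outcomes))
print(f"Estimated average effect: {estimate:.2f}")
```

No single row of the data ever contains both outcomes; that missing half is the counterfactual the list above refers to.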


Human brains operate through about 86 billion interconnected neurons that work together through electrical and chemical signals passed from one end of a cell to another.


If it sounds complex, it is.


Under the umbrella of AI, Machine Learning has an approach called “deep learning,” inspired by the human brain. It features neural networks constructed from multiple layers of software nodes that work together and teach computers to process unstructured data and extract features, while removing some of the dependency on human experts.
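As a rough sketch of what “multiple layers of software nodes” means, here is a minimal forward pass through a tiny untrained network. The layer sizes and weights are made up, and real deep learning frameworks do far more, but the layered structure is the point:

```python
import math
import random

random.seed(42)

# One dense layer: each output node applies weights, a bias, and a
# sigmoid nonlinearity to every input from the previous layer.
def layer(inputs, weights, biases):
    return [
        1 / (1 + math.exp(-(b + sum(w * x for w, x in zip(ws, inputs)))))
        for ws, b in zip(weights, biases)
    ]

def random_layer(n_in, n_out):
    weights = [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_out)]
    biases = [random.uniform(-1, 1) for _ in range(n_out)]
    return weights, biases

# Stack three layers: 4 inputs -> 8 hidden -> 8 hidden -> 2 outputs.
net = [random_layer(4, 8), random_layer(8, 8), random_layer(8, 2)]

x = [0.5, -0.2, 0.1, 0.9]
for weights, biases in net:
    x = layer(x, weights, biases)
print(x)  # two activations, each between 0 and 1
```

Training would adjust those weights and biases against data; the “teaching computers to process unstructured data” happens in that adjustment, not in the forward pass itself.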


But, like causal inference, deep learning has limits on the way its models can interpret data to generate a 360-degree understanding:


  1. More accurate and powerful models need more parameters, which in turn call for more data.

  2. Deep learning models are rigid and incapable of multitasking after they have been trained (relatable!).

  3. Slow programs, data overload, and heavy resource requirements mean models often take a long time to produce accurate results, and they need constant monitoring and maintenance to deliver their best output.
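Point 1 above can be made concrete with a quick back-of-the-envelope parameter count for simple fully connected layers (my own sketch, not tied to any particular model):

```python
# Parameters in one fully connected layer: one weight per
# input-output pair, plus one bias per output node.
def dense_params(n_in: int, n_out: int) -> int:
    return n_in * n_out + n_out

small = dense_params(256, 256)  # a modest layer
large = dense_params(512, 512)  # double the width
print(small, large)             # 65792 262656
print(f"growth factor: {large / small:.2f}")
# Doubling the layer width roughly quadruples the parameter count,
# and each extra parameter generally demands more training data.
```

That quadratic growth is why bigger, more capable models are so hungry for data and compute.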


Further, try as they might, machines still cannot think critically, make unique connections, exercise judgment, or adapt to new situations (try transforming intuition or emotion into a data point!).


So, can understanding bloom in the artificial glow of a computer screen?


Maybe. But enlightenment is human.


Besides, it’s what we do with our newfound understanding that matters. As Leonardo da Vinci put it: “Knowing is not enough; we must apply. Being willing is not enough; we must do.”

