Figure 3. Inverse problems are very difficult because they are often ill posed and/or ill conditioned. Imagine inserting hamburger into a meat grinder, turning the grinder backward, and expecting to obtain a cow at the output.
I have seen a frightening number of projects in which people have not recognized the ill-posed and ill-conditioned nature of the inverse problems they were studying. Many of the project results have been useless or worse. Typically, people like to divide the DFTs of the signals. This is a very bad idea: wherever the denominator spectrum is near zero, the division amplifies noise without bound (Candy et al., 1986). The approach to dealing with ill-posed and ill-conditioned problems is generally called "regularization." This involves defining an associated well-posed problem, the solution of which is well behaved and offers a reasonable approximation to the solution of the ill-posed problem (Candy et al., 1986).
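A minimal numerical sketch of both points, with illustrative choices of signal, kernel, noise level, and regularization parameter (none taken from the article): naive spectral division blows up where the kernel's spectrum is small, while a Tikhonov-style regularized division stays well behaved.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 256

# True input: a rectangular pulse.
x = np.zeros(n)
x[60:80] = 1.0

# Smoothing (low-pass) kernel, centered at index 0 (circular).
k = np.arange(n)
d = np.minimum(k, n - k)
h = np.exp(-0.5 * (d / 2.0) ** 2)
h /= h.sum()

# Forward problem: blur plus a little measurement noise.
H = np.fft.fft(h)
y = np.real(np.fft.ifft(np.fft.fft(x) * H))
y += 1e-3 * rng.standard_normal(n)
Y = np.fft.fft(y)

# Naive deconvolution: divide the DFTs. Where |H| is near zero,
# the division amplifies the noise without bound.
x_naive = np.real(np.fft.ifft(Y / H))

# Tikhonov-style regularization: replace 1/H with conj(H)/(|H|^2 + eps).
# eps trades a little bias for bounded noise amplification.
eps = 1e-3
x_reg = np.real(np.fft.ifft(Y * np.conj(H) / (np.abs(H) ** 2 + eps)))

print("naive RMS error:      ", np.sqrt(np.mean((x_naive - x) ** 2)))
print("regularized RMS error:", np.sqrt(np.mean((x_reg - x) ** 2)))
```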
Statistical Detection/Classification/Target Recognition Issues
The technical area in which I have witnessed the largest number and most serious cases of algorithm abuse is in detection/classification/target recognition problems. Many groups of people embark on such problems without having studied detection/classification theory (Van Trees, 1968; Duda et al., 2001). The hubris is startling and the results are disastrous. Nonetheless, the problem is ubiquitous.
Poor Experiment Design
When using a supervised classifier, the training and testing data sets must be separate and representative of each other. Most of all, the classifier must not be tested using the same data set on which it was trained (Duda et al., 2001; Narins and Clark, 2016). Another important point is that the classification experiment must include the measurement of both target detections and false alarms. Amazingly, I have seen countless costly projects in which only the detections were measured, with no regard to false alarms. This ensures that classification performance cannot be measured properly, as described next.
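A minimal sketch of both rules, using synthetic data and scikit-learn (my choice of tooling, not the article's): train and test on disjoint sets, and count both detections and false alarms on the held-out data.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix

# Synthetic two-class data standing in for target/no-target measurements.
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)

# Disjoint training and testing sets -- never test on the training data.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, stratify=y, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Count detections AND false alarms on the held-out set.
tn, fp, fn, tp = confusion_matrix(y_test, clf.predict(X_test)).ravel()
p_d = tp / (tp + fn)    # probability of detection
p_fa = fp / (fp + tn)   # probability of false alarm
print(f"P_D = {p_d:.3f}, P_FA = {p_fa:.3f}")
```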
Erroneous Methods of Measuring System Performance
The most egregious and most common error is that of using the quantity probability of detection P_D as the performance index for a classifier. Bayesian hypothesis testing theory tells us that both P_D and the probability of false alarm P_FA (or their complements) are required to specify detection performance (Van Trees, 1968). The user must make the trade-off between the two by choosing a decision threshold based on a receiver operating characteristic (ROC) curve. Nonetheless, I have witnessed countless people who computed only the P_D and reported the results as if they were meaningful (e.g., in PhD theses, project reports, and program reviews). Smart people behaving foolishly.
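A small sketch of the threshold trade-off, using synthetic decision statistics (the Gaussian scores and their separation are arbitrary illustrative choices): sweeping the decision threshold traces out the ROC curve, and each threshold yields one (P_FA, P_D) operating point.

```python
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(0)

# Synthetic decision statistics under the two hypotheses.
scores_h0 = rng.normal(0.0, 1.0, 5000)   # no target present
scores_h1 = rng.normal(1.5, 1.0, 5000)   # target present
labels = np.r_[np.zeros(5000), np.ones(5000)]
scores = np.r_[scores_h0, scores_h1]

# Each threshold gives one (P_FA, P_D) point on the ROC curve.
p_fa, p_d, thresholds = roc_curve(labels, scores)

# Pick the operating point closest to, say, P_FA = 0.01.
i = np.argmin(np.abs(p_fa - 0.01))
print(f"threshold {thresholds[i]:.2f}: "
      f"P_D = {p_d[i]:.3f} at P_FA = {p_fa[i]:.3f}")
```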
If one wants a single scalar classification performance index, one should use the probability of correct classification P_CC or its complement, the probability of error P_Error = 1 − P_CC. Under some simplifying assumptions, P_CC = ½[P_D + (1 − P_FA)]. In addition, one should always compute a statistical confidence interval about P_CC (Duda et al., 2001; Narins and Clark, 2016).
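A worked sketch of the formula and its confidence interval, with hypothetical counts (the numbers are made up for illustration) and equal numbers of target and no-target trials, the equal-priors case in which P_CC = ½[P_D + (1 − P_FA)] holds:

```python
import numpy as np
from scipy.stats import norm

# Hypothetical counts from a classification experiment with equal
# numbers of target and no-target trials.
n_targets, n_detected = 500, 440      # -> P_D
n_clutter, n_false = 500, 40          # -> P_FA

p_d = n_detected / n_targets
p_fa = n_false / n_clutter
p_cc = 0.5 * (p_d + (1.0 - p_fa))     # P_CC = 1/2 [P_D + (1 - P_FA)]

# With equal priors, P_CC is the fraction of correct decisions, so a
# binomial normal-approximation 95% confidence interval applies.
n = n_targets + n_clutter
half_width = norm.ppf(0.975) * np.sqrt(p_cc * (1.0 - p_cc) / n)
print(f"P_CC = {p_cc:.3f} +/- {half_width:.3f} (95% CI)")
```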
Proposals
Proposals are all about trust. When writing proposals, you should do your best to understand the point of view of the sponsor/manager. This means knowing what is important to the organization and its sponsors, including its mission and priorities. The sponsor must trust that you have her/his best interests in mind and trust that you will deliver. I have witnessed countless people living diminished careers because they never learned to give enough thought to their sponsors' priorities rather than their own. Smart people behaving foolishly.
Clark’s Law of Knowing What Is Important
The most important ability one can have is the ability to know what is important (Clark, 2016).
Clark’s Law of Management Decisions
Most management decisions involve politics, so they are often based more on fear and ego than on principle. The biggest fear is looking bad (Clark, 2016).