The author has noted that there is a flood of Post-Selection misconduct in AI, machine learning, and data processing, as well as in other scientific and engineering disciplines that use machine learning or control techniques. Post-Selection misconduct has long been known as P-hacking in biology and economics. Unfortunately, the misconduct has penetrated almost all media-hyped AI projects. Post-Selection misconduct is well known in statistics, but there the degree of performance exaggeration is limited, because it is unlikely that a parametric method (e.g., mean or variance estimation) can fit an entire training data set perfectly. The situation is very different for neural network methods, because greedy data fitting by a neural network can fit any data set perfectly.

Two examples are Nearest Neighbor With Threshold (NNWT) (Weng, AIEE 2023) and Pure Guess Nearest Neighbor (PGNN) (Weng, ISAIC 2022), which perfectly fit any fitting data set. Under the Post-Selection protocol, an author selectively reports the luckiest network on a validation set while hiding all the less lucky networks (because the author knows the answers of the validation set); consequently, NNWT and PGNN can always report the luckiest network on the validation set. Often, when both a validation set and a so-called test set were available, authors simply used the test set like a validation set, so the luckiest network was selected on both the validation set and the so-called test set. Regardless of the accuracy values such papers reported, they are only validation errors, not test errors, because a true test never took place. In this talk, we will see theoretical work and experimental data demonstrating that the luckiest network on a validation set performs only about average in a new test (Weng, ICCRC 2023; Wu, IEEE CDSNL 2024). A simple numerical example: Post-Selection from 5 networks can produce an exaggeration rate of 40, reporting an error of 0.01 where the true error is 0.4. However, in a future test where the user does not know the answers, the user has no basis on which to cherry-pick the luckiest network. Therefore, the Post-Selection protocol is invalid.

A holistic solution that totally avoids Post-Selection misconduct is the Developmental Network (DN). Weng established that a DN optimally approximates the distribution of the joint space of inputs and motor outputs, where optimality means maximum likelihood. In other words, a DN trains only a single network and does not need any Post-Selection. Further, that single network is the maximum-likelihood estimate and is free from the local-minima problem. The maximum-likelihood result also draws on the concept of statistical efficiency from Lobe Component Analysis (LCA; Luciw & Weng, IEEE TAMD 2009). Therefore, although AI is in a crisis of Post-Selection flooding, AI's future seems bright. A remaining challenge for brain-like chips is on-chip learning.
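To make the perfect-fitting claim concrete, here is a minimal Python sketch of a nearest-neighbor memorizer. It is an illustrative stand-in for the NNWT/PGNN idea, not Weng's published algorithms; the class name and interface are assumptions for this example. Because every stored fitting input is its own nearest neighbor (distance zero), the model reaches zero error on any fitting set, even with random labels.

```python
import numpy as np

class NearestNeighborMemorizer:
    """Illustrative stand-in (assumption, not Weng's NNWT/PGNN code):
    memorize every (input, label) pair and answer queries by nearest
    neighbor. Zero error on the fitting set is guaranteed because each
    fitting input is its own nearest neighbor."""

    def fit(self, X, y):
        self.X = np.asarray(X, dtype=float)
        self.y = np.asarray(y)
        return self

    def predict(self, X):
        X = np.asarray(X, dtype=float)
        # Pairwise squared distances between queries and stored points.
        d2 = ((X[:, None, :] - self.X[None, :, :]) ** 2).sum(axis=-1)
        return self.y[d2.argmin(axis=1)]

# Perfect fit on any fitting set, even with arbitrary labels:
X = np.random.randn(100, 5)
y = np.random.randint(0, 10, size=100)
model = NearestNeighborMemorizer().fit(X, y)
assert (model.predict(X) == y).all()   # zero fitting error
```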
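The claim that the luckiest-on-validation network performs only about average in a new test can be illustrated with a small simulation. This is a hedged sketch under simple assumptions (all 5 candidate networks share the same true error rate of 0.4, and validation and test errors are independent finite-sample estimates); it is not a reproduction of the cited experiments. Cherry-picking the minimum of the 5 validation errors lowers the reported number, while the same networks' fresh test errors stay near the true 0.4; the exaggeration can be far larger in practice, as in the 0.4-to-0.01 example above.

```python
import numpy as np

rng = np.random.default_rng(0)
k, n_val, n_test, true_err = 5, 100, 100, 0.4   # 5 candidates, true error 0.4
trials = 10_000

# Independent finite-sample error estimates on validation and test sets.
val = rng.binomial(n_val, true_err, size=(trials, k)) / n_val
test = rng.binomial(n_test, true_err, size=(trials, k)) / n_test

lucky = val.argmin(axis=1)          # Post-Selection: pick the luckiest on validation
rows = np.arange(trials)
print("reported (luckiest validation) error:", val[rows, lucky].mean())   # well below 0.4
print("same networks' fresh test error:     ", test[rows, lucky].mean())  # about 0.4, i.e., average
```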
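On the statistical-efficiency point, the following sketch shows only the textbook building block that such analyses rest on, not the full LCA algorithm of Luciw & Weng: an incremental update with weight 1/t keeps the running estimate equal to the batch sample mean, which is the maximum-likelihood (and minimum-variance, i.e., statistically efficient) estimator of a Gaussian mean.

```python
import numpy as np

def incremental_mean(xs):
    """Maintain the sample mean incrementally: after t samples the
    estimate equals the batch mean, the maximum-likelihood estimator
    of a Gaussian mean."""
    m = 0.0
    for t, x in enumerate(xs, start=1):
        m += (x - m) / t            # equivalently m = ((t-1)/t)*m + (1/t)*x
    return m

xs = np.random.default_rng(1).normal(3.0, 1.0, size=1000)
assert abs(incremental_mean(xs) - xs.mean()) < 1e-9
```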