If accuracy and efficiency were the sole goals when bringing a software product to market, then there'd be no issue.
But as bwar alluded to earlier, there's a time-to-market component that can't be ignored. Faster to market means more money over the lifecycle of the product. And the more time you spend building it, the more expensive the final product becomes.
So like anything else, there's a tradeoff between quality and timeliness. More efficient code, more accurate code, better-tested code--these are all desirable things. But if the cost in time to market and/or production budget is too high, then it's not worth the effort.
And now that compute, storage, and most other hardware resources are so plentiful and powerful, the need to write small, efficient, high-quality code is diminished.
Yep.
And note that this is somewhat a "bringing a software product to market" statement, one that does NOT generalize to all software.
For ML, I think efficiency is extremely important, especially as the data size scales. Mike may know more about this than I do, but if you're testing iterative algorithm changes and you can cut the time to analyze your data set from 48 hours to 24, you can test twice as much in the same period of time, and learn more than you would with fewer iterations.
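As a toy back-of-the-envelope check (hypothetical numbers, not from any real project): in a fixed one-week window, halving a 48-hour analysis run roughly doubles the number of complete experiments you can fit.

```python
def iterations_possible(window_hours, run_hours):
    """Number of complete experiment runs that fit in a wall-clock window."""
    return window_hours // run_hours

week = 7 * 24  # one-week window, in hours

# At 48 hours per run, three full runs fit in a week;
# at 24 hours per run, seven do -- more than double the learning.
print(iterations_possible(week, 48))  # -> 3
print(iterations_possible(week, 24))  # -> 7
```

The gain is slightly better than 2x here only because of how partial runs round off; the underlying point is simply that iteration count scales inversely with runtime.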
Another area where efficiency is highly important is cloud computing. Often when people think of "software", they think of a "program" running on a "computer". But the massive increase in computing power means this isn't really the case any more.
We have virtualization, where you might have a very large number of "computers" running on one "computer". What that means is that a single server operates multiple "virtual machines": each one is a software-virtualized "computer" with its own operating system, on which you can run an application that--for all it knows--is running on its own PC. Once you start doing this, efficiency becomes very important again, especially if you're paying for the compute resources from a cloud provider.
This is then extended by containerization, where you take certain functions that need to be separated from each other, but of which you may need hundreds or thousands running at any given time. Think of something like Ticketmaster selling concert tickets. You might have 5,000 individual users logged in and searching, and each search is a unique experience for that user: how long the tickets they select are held for payment, their path through the order / credit card / etc. steps. "Containers" are used to replicate that process hundreds or thousands of times at once, while keeping each one independent of all the others, because you don't want a glitch where you and I are both buying tickets at the same time and suddenly I get your front-row seats while my card is charged my nosebleed price, and you get my nosebleed seats while your card is charged your front-row price. If you're doing one of something, efficiency doesn't matter much. If you're doing hundreds or thousands of that same thing at once across your hardware... efficiency is critical.
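A toy sketch of that per-session isolation idea (all names here are hypothetical, and this is nothing like a real ticketing system): each checkout gets its own state object, so one session's seat hold and price can never bleed into another's--the same property a container gives a whole process.

```python
from dataclasses import dataclass

@dataclass
class CheckoutSession:
    """One user's isolated checkout state (hypothetical illustration)."""
    user: str
    seat: str = ""
    price: float = 0.0

    def hold(self, seat, price):
        # State lives only on this instance, mirroring how a container
        # confines one user's checkout to its own isolated environment.
        self.seat, self.price = seat, price

alice = CheckoutSession("alice")
bob = CheckoutSession("bob")
alice.hold("front-row A1", 450.00)
bob.hold("nosebleed Z99", 45.00)

# Each session only ever sees its own seat and price.
print(alice.user, alice.seat, alice.price)
print(bob.user, bob.seat, bob.price)
```

The isolation in this sketch comes cheaply from object identity; real containers enforce it at the OS level, which is exactly why running thousands of them makes per-instance efficiency matter so much.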
So it's not meant to be a blanket statement. It's more that if the [in]efficiency of your code is someone else's problem (i.e. it runs on someone else's computer), it's nowhere near as important to you as a developer as when you're the one paying for the computing power to run it at scale, whether that's on-premises or via a cloud computing service.