Finally, with the help of WebHooks, we can use client-side programming to replace server-side programming in SharePoint. ( http://www.paitgroup.com/microsoft-renews-its-vows-with-sharepoint/ )
But does it really resolve the problem of customizing SharePoint Online sites?
In many cases, YES, but we need to be very careful, because "data communication" moves from RAM-to-RAM to computer-to-computer.
1. Hardware latency
The latency of communication between two processes on the same machine is totally different from the latency between two different machines. Check the numbers here ( https://gist.github.com/jboner/2841832 ): a "main memory reference" takes around 100 ns, while a "round trip within same data center" takes around 500 us, which means the latter is 5,000 times slower.
For external servers (not in the same data center), a round trip may take more than 30 ms. That's 300,000 times slower.
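To put those ratios in perspective, here is the back-of-the-envelope arithmetic. The first two figures come from the gist linked above; the 30 ms external round trip is the rough assumption stated here:

```python
# Approximate latency figures, in nanoseconds.
MAIN_MEMORY_NS = 100                 # main memory reference: ~100 ns
DATACENTER_ROUNDTRIP_NS = 500_000    # round trip within same data center: ~500 us
WAN_ROUNDTRIP_NS = 30_000_000        # round trip to an external server: ~30 ms (assumed)

# How many times slower than a RAM access is each round trip?
print(DATACENTER_ROUNDTRIP_NS // MAIN_MEMORY_NS)  # 5000
print(WAN_ROUNDTRIP_NS // MAIN_MEMORY_NS)         # 300000
```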
Caching doesn't help much in many cases.
2. Network reliability
Let's assume that all servers are in the same data center. Is the network in a data center as robust as the RAM in one computer?
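It isn't: unlike a memory read, a remote call can time out or fail transiently, so client-side code has to handle failures explicitly. A minimal retry-with-backoff sketch (the wrapped call is whatever remote CSOM/REST operation you make; the helper name and parameters are hypothetical):

```python
import time

def call_with_retry(remote_call, retries=3, base_delay=0.5):
    """Invoke a remote call, retrying with exponential backoff.

    A RAM access never needs this; every cross-machine call does.
    """
    for attempt in range(retries):
        try:
            return remote_call()
        except ConnectionError:
            if attempt == retries - 1:
                raise  # out of retries, give up
            time.sleep(base_delay * (2 ** attempt))  # 0.5s, 1s, 2s, ...
```

Usage would look like `call_with_retry(lambda: fetch_list_items(site_url))`, where `fetch_list_items` stands in for any real remote API call.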
3. Extra hardware overhead
How much work does it take to handle a web service request? Check "Introduction to IIS Architectures" here: http://www.iis.net/learn/get-started/introduction-to-iis/introduction-to-iis-architecture
How much extra CPU, memory access, disk I/O, and network I/O will be consumed for each request? Do we need to pay for that?
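One way to see this overhead on your own machine is to compare an in-process function call against a loopback HTTP request, which still pays for sockets, HTTP parsing, and the local network stack even with no real network involved. A rough sketch; absolute numbers vary by machine:

```python
import http.server
import threading
import time
import urllib.request

class Handler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")
    def log_message(self, *args):  # silence per-request console logging
        pass

# Start a throwaway HTTP server on a random free loopback port.
server = http.server.HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_address[1]}/"

def in_process():
    return "ok"

start = time.perf_counter()
for _ in range(100):
    in_process()
local = time.perf_counter() - start

start = time.perf_counter()
for _ in range(100):
    urllib.request.urlopen(url).read()
remote = time.perf_counter() - start

print(f"in-process: {local:.6f}s, loopback HTTP: {remote:.6f}s")
server.shutdown()
```

Even on loopback, each request costs orders of magnitude more than a direct call, and that is before any real network latency is added.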
4. Development and troubleshooting
Moving a workflow activity from the server object model to the client object model is, for me, painful.
SharePoint 2013 CSOM APIs are powerful, but because there is one more layer, they are more complex. The post linked above suggests utilizing mature third-party APIs instead of "reinventing the wheel". I totally disagree with that, because of reliability.
If everything is on-premises, a normal mid-size enterprise may utilize 10,000 APIs (through assemblies) from 20 different software vendors. That's fine: everything is fully tested before deployment to production servers, and any update/patch will also be fully tested on non-production servers first.
But if there are 10,000 APIs (through web services) from 20 different vendors, how can we keep the whole system stable? If, on average, each API is upgraded/changed every 10 years, then roughly 3 APIs (web services) will change every day. And it's not likely those changes can be fully tested in a non-production environment first.
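The arithmetic behind that "3 per day" estimate:

```python
apis = 10_000
upgrade_cycle_days = 10 * 365   # each API changes about once every 10 years

changes_per_day = apis / upgrade_cycle_days
print(round(changes_per_day, 1))  # 2.7, i.e. roughly 3 web services changing every day
```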
In general, the quality of Microsoft products is pretty good, but how many times has Microsoft had to recall updates to its own products? Can we expect the software/services/APIs from those 20 vendors to all be as good as Microsoft's? And how much do we need to pay for these APIs every year?
In summary, we can move everything into the cloud; we just need to be cautious.