Friday, May 27, 2016

SharePoint Online - replace Server-side programming with Client-side programming

Finally, with the help of WebHooks, we can use client-side programming to replace server-side programming in SharePoint.

But, does it really resolve the problem of customization on SharePoint Online sites?

In many cases, YES, but we need to be very careful, because "data communication" moves from RAM-to-RAM to computer-to-computer.

1. Hardware Latency

Latency of communication between different processes on the same machine is totally different from latency between different machines. Check the well-known "Latency Numbers Every Programmer Should Know" table: a "main memory reference" takes around 100 ns, while a "round trip within the same data center" takes around 500 us, which means the latter is 5,000 times slower.

For external servers (not in the same data center), a round trip may take more than 30 ms. That's 300,000 times slower.

Caching doesn't help much in many cases.

2. Stability

Let's assume that all servers are in the same data center. Is the network in a data center as robust as the RAM on one computer?

3. Extra hardware overhead

How much work does it take to handle a web service request? Please check the "IIS Architectures" documentation for details.

How much extra CPU, memory access, disk I/O, and network I/O will be consumed for each request? Do we need to pay for that?

A data center may handle one million concurrent users easily, but when one user opens a customized page, the JavaScript on that page may fire many HTTP(S) requests. And each triggered workflow instance may also submit many HTTP(S) requests!

4. Development and trouble shooting

Moving a workflow activity from the "Server Object Model" to the "Client Object Model" is, for me, painful.

SharePoint 2013 CSOM APIs are powerful, but because there is one more layer, they are more complex. However, this post suggests utilizing mature third-party APIs instead of "reinventing the wheel". I totally disagree with that, because of "Reliability".

5. Reliability

If everything is on-premises, a normal mid-size enterprise may utilize 10,000 APIs (through assemblies) from 20 different software vendors. That's fine: everything is fully tested before deploying to production servers, and any update/patch will also be fully tested on non-production servers first.

But if there are 10,000 APIs (through web services) from 20 different vendors, how can we keep the whole system stable? If, on average, each API is upgraded/changed every 10 years, then roughly 3 APIs (web services) change every day. And it's unlikely those changes can be fully tested in a non-production environment first.

In general, the quality of Microsoft products is pretty good, but how many times has Microsoft recalled updates to its products? Can we expect the software/services/APIs from those 20 vendors to all be as good as Microsoft's? And how much do we need to pay for these APIs every year?

In summary, we can move everything into the cloud; we just need to be cautious.
