Bytes IT Community

Moving server-side logic to JavaScript

Hi Everyone,

I work for a financial company. I am planning to give a presentation
to the rest of the development team (15 people) here on moving server-side
logic to client-side javascript for an internal intranet application
rewrite. This approach will definitely stir up hot debate from
hardcore server-side Java folks who want to do UI stuff even on the
server! Since I am pretty much known as the JS or UI Guy of the
group, my Boss wants to hear the broad spectrum of PROs/CONs from each
proponent.

Personally, I think Javascript/Ruby is a more productive language than
Java.

My idea is simple. It is to convert most business logic to
client-side javascript and have calls to server-side code restricted to
user roles, with data validation. That's as simple as it gets.

Here is my list of arguments:

1. True separation of UI logic from server-side data processing code
(no more server code spitting out client-side code)
2. Better user experience with faster response
3. The whole web 2.0 thing (no page refresh) :)
4. Offload processing from the server to the client, thereby reducing
network traffic (not really a strong argument, is it?)

Keep in mind this is an internal app. Even if someone figures out the
JS logic behind the page and tries to hack the app by posting to the
Servlets, they will be restricted by their login role, and data
validation will take care of any bogus data being submitted.

Any feedback greatly appreciated to help this lonely UI guy!

-Pete

Feb 12 '07 #1
14 Replies


<ra*******@gmail.com> wrote:
> Hi Everyone,
>
> I work for a financial company. I am planning to give a
> presentation to the rest of the development team (15 people)
> here on moving server side logic to client-side javascript
> for an internal intranet application rewrite.
> This approach will definitely stir up hot debate from
> hardcore server-side Java folks who want to do UI stuff
> even on the server! Since I am pretty much known as the
> JS or UI Guy of the group, my Boss wants to hear the broad
> spectrum of PROs/CONs from each proponent.
>
> Personally, I think Javascript/Ruby is a more productive
> language than Java.
Well don't even mention that (particularly to your Java programmers) or
you will find yourself not being taken seriously at all.
> My idea is simple. It is to convert most business logic to
> client-side javascript and have calls to server-side code
> restricted to user roles with data validation. That's as
> simple as it gets.
Meanwhile (if they are any good) the server-side Java programmers will
already have the business logic in the form of re-usable components and
will question the cost of re-creating what already exists. Or they will
point out that if they implement new business logic they will be able to
easily re-use it in later projects. Or they will point out that in order to
validate on the server they will still need most of the business logic on
the server, so anything new will be written twice.
> Here are my list of arguments
>
> 1. True separation of UI logic from server-side data
> processing code (no more server code spitting out
> client-side code)
Didn't you just say you were planning on putting the business logic on
the client?

Putting all the user-interface code on the client makes sense (though it
is not always practical: consider sorting a table of 400,000 transactions
on the client. That is not going to happen).

Consider this 'separation' carefully. The thing to go for is a situation
where the server doesn't care about the specifics of the client (web
browser, desktop client, etc.) and the client doesn't care about the
technology running on the server (Java, ASP, .NET, Ruby, etc.). That
separation would be about how the two communicate (SOAP (web services),
custom XML, JSON or whatever) and what messages/data they transmit.
Pitch the separation at that point and you are building UI components for
a browser that can be used with any similar communication interface, and
the server code can provide the same interface to any client.
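As a rough illustration of pitching the separation at the communication interface, here is a minimal JavaScript sketch; the message shape, field names and the `ListRecords` action are all hypothetical, not anything from the thread's actual application:

```javascript
// Hypothetical wire contract: the client knows only this message shape,
// not whether the server runs Java, ASP, .NET or Ruby.
function buildListRequest(role, filter) {
  return JSON.stringify({ action: 'ListRecords', role: role, filter: filter });
}

// Parse the server's reply; any server technology can produce this JSON.
function parseListResponse(json) {
  var msg = JSON.parse(json);
  if (msg.status !== 'ok') {
    throw new Error(msg.error || 'request failed');
  }
  return msg.records;
}
```

(In 2007-era browsers a library such as Crockford's json2.js would supply `JSON.parse`/`JSON.stringify`; they were not yet native.)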
> 2. Better user experience with faster response
That doesn't necessarily follow, as the odds are client desktop machines
are of varying capability and include many that are not that new and not
necessarily due for an upgrade in the near future, while a brand new dual
or quad multi-core CPU server with 4GB RAM and the latest hard disks
(with appropriate RAID spanning) can have the server-side code flying,
organisation-wide, for $4,000 or so (they really have got very cheap over
the last year or so).
> 3. The whole web 2.0 thing (no page refresh) :)
Buzzwords are not a reason for making strategic or architectural
decisions.

If you mention it, be ready to be asked precisely what "web 2.0" means,
and to be ridiculed if all you say is "no page refresh".

Have you any practical experience of designs where the page is never
refreshed? Odds are we are talking about Windows 2000 desktop machines
and so no opportunity to upgrade IE 6 to IE 7 (which is far better in
this respect) and so the need to put a great deal of work into not having
IE 6 gradually accumulate ever more of the client PC's memory, and
operate slower and slower as time goes on.
> 4. Offload client processing from server therefore reducing
> network traffic (not really a strong argument is this?)
As I implied above, the need to offload work from the server is
diminishing, and modern networks can handle the traffic (particularly if
HTTP compression is employed).
> Keep in mind this is an internal app. Even if someone figures
> out the JS logic behind the page and try to hack the app by
> posting to Servlets, they will be restricted by their login
> role, and data validation will take care of any bogus data
> being submitted.
So you will not be selling the server-side Java programmers on the idea
of them having less work to do, as they will have to repeat much of what
you do on the client in order to validate whatever is submitted.
> Any feedback greatly appreciated to help this lonely UI guy!
I would have thought (and particularly in the context of a financial
institution, and for an internal application) the case to make would be a
financial one. The impact of potentially increased/decreased productivity
of the users of the system, with and against the cost/benefit of R&D,
design, implementation, ongoing maintenance, and their knock-on effects
for future projects.

Richard.

Feb 12 '07 #2

On Feb 12, 7:56 am, "rabbit...@gmail.com" <rabbit...@gmail.com> wrote:
<snip>
I agree with Richard. JavaScript may be more productive than Java but
not if the Java code already exists. Validation code should never
leave the server and so will need duplication.
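To make the duplication point concrete, here is a hedged sketch of client-side rules that the server-side Java would have to re-implement independently; the field names and formats are invented for illustration:

```javascript
// Hypothetical validation rules, written small and declarative so the
// inevitable Java re-implementation stays easy to keep in sync.
var rules = {
  amount:  function (v) { return /^\d+(\.\d{1,2})?$/.test(v) && parseFloat(v) > 0; },
  account: function (v) { return /^[A-Z]{2}\d{8}$/.test(v); }
};

// Returns the names of the fields that failed; [] means the form may be
// submitted -- but the server must still run its own copy of these checks.
function validate(fields) {
  var errors = [];
  for (var name in rules) {
    if (!rules[name](String(fields[name] || ''))) {
      errors.push(name);
    }
  }
  return errors;
}
```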

However, learning and implementing the JavaScript to do the things you
describe is still fun and interesting. There may be a way to speed up
parts of the application, but doing the whole site (if it is a big one)
as one page seems like overdoing a good thing.

The URLs of these one-page apps are bookmarkable. The Yahoo! use
is good. The Backbase use is probably not.

http://maps.yahoo.com/
http://www.backbase.com/

Here is an interesting client-side MVC architecture experiment (that
is buggy)

http://trimpath.com/project/wiki/SteveYen

Peter

Feb 12 '07 #3

VK
On Feb 12, 6:56 pm, "rabbit...@gmail.com" <rabbit...@gmail.com> wrote:
> Personally, I think Javascript/Ruby is a more productive language than Java.
I wouldn't put Javascript and Ruby together as one _language_ unit.
Javascript is a language, Ruby is a language, and Ruby on Rails is a
well-known Web framework whose client side depends on Javascript.

On the client side Javascript is indeed much more productive and
stable than a Java applet.

It is also normally much cheaper in development: in my area, for
instance, for $35/h you get a happy, experienced enough worker
bee, while with Java it will cost you $55/h just for someone to
take a look at your needs and tell you how much it will really
cost ;-) Pretty much the same math holds for other client-side vs
server-side decisions. This math is actually the main reason behind
the furious arguments every time a move like yours approaches. So
be ready for all kinds of arguments, up to "the company will collapse"
and "God will have no mercy on you" :-)
> My idea is simple. It is to convert most business logic to
> client-side javascript and have calls to server-side code
> restricted to user roles with data validation. That's as
> simple as it gets.
OK
> Here are my list of arguments
<snip>
> Any feedback greatly appreciated to help this lonely UI guy!
A good objective, clear arguments.

A client-side solution requires client-side scripting to be enabled - at
least for the given domain. If you need to support IE prior to 7.0 for
AJAX solutions it also implies ActiveX support being enabled - at least
for the given domain. Check with the administrator that the settings are
correct - or can be corrected - on all machines involved.

This is the only technical consideration coming into my mind right
away.
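That deployment check can also be done defensively in code. Here is a sketch of the usual feature test of the era (the ProgID strings are the standard MSXML ones):

```javascript
// Probe for an XHR implementation: native object first, then the ActiveX
// variants IE 6 needs. Returns null when scripting/ActiveX is locked down,
// so the app can fall back to plain form posts.
function createXHR() {
  if (typeof XMLHttpRequest !== 'undefined') {
    return new XMLHttpRequest();
  }
  if (typeof ActiveXObject !== 'undefined') {
    var progIds = ['Msxml2.XMLHTTP', 'Microsoft.XMLHTTP'];
    for (var i = 0; i < progIds.length; i++) {
      try { return new ActiveXObject(progIds[i]); } catch (e) { /* try next */ }
    }
  }
  return null;
}
```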

Feb 12 '07 #4

Thank you for all the wonderful suggestions and feedback.

With the web 2.0 buzz, my boss has definitely bought into the user
experience thing (drag/drop, type-ahead completion). He has already
sold ideas of better apps to other department stakeholders.

To build more of these Ajax style applications going forward, I think
its necessary to have "clean" and "separated" layers of client browser
code and server code.

I think MVC is really a big cloud of confusion in implementation.
Here is my recommended architecture:

Client Code handles:
Server Code
Business logic on the client-side will be duplicated, but since this
is a rewrite of an old year 2000 app with messy JSPs various .
Feb 12 '07 #5

Oooops hit wrong button...

Thank you for all the wonderful suggestions and feedback.

With the web 2.0 buzz, my boss has definitely bought into the user
experience thing (drag/drop, type-ahead completion). He has already
sold ideas of better apps to other department stakeholders.
To build more of these Ajax-style applications going forward, I think
it's necessary to have "clean" and "separated" layers of client browser
code and server code.
I think MVC is really a big cloud of confusion in implementation. I
believe the controller is the user. After all, he/she is the one
pushing the buttons and clicking links.
Here is my recommended architecture:
1) VIEW and CONTROLLER handled by client code.
- UI patterns
- html form backed with business logic
- navigation

2) The MODEL is handled by server side code.
- data access API via URLs, for example:
- /AddRecord
- /UpdateRecord
- /RemoveRecord
- /ListRecords

3) All client and server communication via JSON strings (XML is too
heavy and fat).
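A minimal sketch of how the client side might address that URL-per-action model API; the base path and parameter names here are assumptions, not the real endpoints:

```javascript
// Build a URL for the hypothetical record API (/AddRecord, /UpdateRecord,
// /RemoveRecord, /ListRecords), encoding each parameter safely.
function actionUrl(base, action, params) {
  var pairs = [];
  for (var key in params) {
    pairs.push(encodeURIComponent(key) + '=' + encodeURIComponent(params[key]));
  }
  return base + '/' + action + (pairs.length ? '?' + pairs.join('&') : '');
}
```

The client would then POST a JSON string of the record to that URL, keeping the wire format independent of the Java on the other side.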
Main Benefits:

1. The cool thing here is that the server developer is only focused on
writing model code, data connections, data validation and integrity,
and services.
2. Server code doesn't break front-end because server code does not
have embedded JS code.
3. Server developer can work independently from client developer!!

Pete



Feb 12 '07 #6

> Putting all the user-interface code on the client makes sense (though it
> is not always practical: consider sorting a table of 400,000 transactions
> on the client. That is not going to happen).
It would not be good interface design to display 400K rows. Part of
good UI design is the presentation of relevant content in summarized
format to assist in making decisions, or providing a simple facility
to find records within a 400K table easily (think record
filter over a paged table).
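On the client, that "record filter over a paged table" idea reduces to asking the server for one page of the filtered set at a time; a sketch with invented parameter names:

```javascript
// Translate UI state (filter text, page number) into a server query, so
// the 400K rows never leave the server; only one page of matches travels.
function pageParams(filterText, pageNumber, pageSize) {
  return {
    filter: filterText,
    offset: (pageNumber - 1) * pageSize,
    limit: pageSize
  };
}
```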

And yes, JS has its limitations speed-wise, being 50 times slower than
compiled Java.

Feb 12 '07 #7

VK
On Feb 12, 10:46 pm, "rabbit...@gmail.com" <rabbit...@gmail.com> wrote:
> I think MVC is really a big cloud of confusion in implementation.
Indeed: because mostly it remains a subject of pleasant talks with the
boss. When it comes to an actual implementation, many teams just fall
back to the twisted-around server-client schema they are used to:
where the global javascript context becomes the "server" and page
controls and display areas become "clients". The ugliest sample of this
practice remains IXMLHTTPRequest (XMLHttpRequest later). It was
brute-force pulled out of the Microsoft data binding concept - to be
used as a kind of stand-alone internal browser, to manually pull data
from the server and manually distribute it to the "clients" (control and
display areas on the page). It took a good two years to come back to
semi-normal usage - and there is still a long way to go.
> Client Code handles:
In terms of MVC it is not client code but user interface elements. They
implement this or that logic - behavior - and delegate part of the
logic to the server-side data processor.

Feb 12 '07 #8

<ra*******@gmail.com> wrote:
> Putting all the user-interface code on the client makes sense
> (though it is not always practical: consider sorting a table
> of 400,000 transactions on the client. That is not going to happen).
>
> It would not be good interface design to display 400K rows.
<snip>

No, it would be insane. But if the user is looking at Jan 2nd to Feb 23rd
sorted by date and decides they want to switch to viewing the table
sorted by credit balance, you can only make that switch on the client if
you do have all 400k records on the client (or fetch the rest at that
point), which means you cannot contemplate doing that on the client at
all.
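In other words, a client-side re-sort can only ever cover the rows already fetched; switching the sort key over the full set means a new server request. A sketch of the client-side half under that assumption:

```javascript
// Re-sort only the rows currently loaded in the browser. Switching the
// sort key over the full 400k-row set must instead go back to the server.
function sortLoadedRows(rows, key) {
  return rows.slice().sort(function (a, b) {
    if (a[key] < b[key]) { return -1; }
    if (a[key] > b[key]) { return 1; }
    return 0;
  });
}
```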

Richard.

Feb 12 '07 #9

"VK" <sc**********@yahoo.com> wrote:
> On Feb 12, 6:56 pm, <ra*******@gmail.com> wrote:
> <snip>
> It is also normally much cheaper in development: in my area
> for instance for $35/h you are getting a happy experienced
> enough working bee - when with Java it will cost you $55/h
> only for someone having take a look on your needs and telling
> you how much it will really cost ;-) Pretty much the same math
> remains for other client-side vs server-side decisions.
<snip>

LOL. Coming from the person who wrote:-

<URL:http://groups.google.com/group/comp....20fbcd4b4ab7f8>
-and:-

<URL:http://groups.google.com/group/comp....869add6d8dfcad>
- paying an extra $20/h to get someone who can do some analysis and write
code that isn't full of holes is going to be the cheaper option in the
long run. (Pay peanuts; get monkeys.)

Richard.

Feb 12 '07 #10

On Feb 12, 10:42 am, "Peter Michaux" <petermich...@gmail.com> wrote:
<snip>
> The URLs of these one-page apps are bookmarkable. The Yahoo! use
> is good. The Backbase use is probably not.
>
> http://maps.yahoo.com/
> http://www.backbase.com/
Hi Peter,

Can you explain why you think the Backbase use of bookmarking is not
good? Backbase works with Fortune 500 companies that want to add Ajax
to their Java apps (such as Visa, Vanguard, etc.) so it's certainly a
valid AJAX solution. Let me know if you have any questions about
Backbase, or the Backbase website.

Jep (Backbase)
Feb 12 '07 #11

VK
On Feb 13, 1:20 am, "Richard Cornford" <Rich...@litotes.demon.co.uk> wrote:
> LOL. Coming from the person who wrote:-
>
> <URL:http://groups.google.com/group/comp.lang.javascript/msg/2820fbcd4b4ab7f8>
With your historiography researches you are getting annoying to the
point of being funny - and that's a bad point, trust me.

I needed a Vector/List type and I wrote one. With the help of posters
John G Harris and especially Ray I further optimized it, and it is in
active use by now. The majority of posts in that thread were indeed in
the old clj style of "Why do you need a Vector if you don't need a
Vector?" Usually it is not anyone's fricken business why the OP wants
to do this or that - unless there is a suspicion of an illegal activity.
"- I want it because I had a vision to do it by the end of the week or
else the little creature in my head will eat my brains" - anyone may
assume such an answer by default w/o asking, if really dying of
curiosity.

As an exception, explaining: a Vector-like object is very useful for 3D
vector drawing with multiple shapes. Instead of handling z-index it is
_times_ quicker to keep a Vector of shape references so as to draw
them in order of current position. A Vector (unlike Array) provides
effective tools for run-time updates and especially inter-layer
transfers.

If it's still not enough of reason then feel free to go to hell from
this point forward.

Of course it is all in a "direct connection" with the OP's question...
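For what it is worth, the draw-order idea described above can be sketched without any z-index juggling; `DrawList` and its methods are invented names for illustration:

```javascript
// Keep shape references in draw order; painting the list front-to-back
// replaces per-shape z-index bookkeeping.
function DrawList() { this.shapes = []; }

DrawList.prototype.add = function (shape) {
  this.shapes.push(shape);
};

// Move a shape to the top: one array splice, rather than renumbering
// every other shape's z-index.
DrawList.prototype.bringToFront = function (shape) {
  var i = this.shapes.indexOf(shape);
  if (i >= 0) {
    this.shapes.splice(i, 1);
    this.shapes.push(shape);
  }
};
```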

Feb 12 '07 #12

VK <sc**********@yahoo.com> wrote:
> On Feb 13, 1:20 am, "Richard Cornford" wrote:
> > LOL. Coming from the person who wrote:-
> >
> > <URL:http://groups.google.com/group/comp....20fbcd4b4ab7f8>
>
> With your historiography researches you are getting
> annoying to the point of being funny - and that's a
> bad point, trust me.
What is up, are you worried that people will stop taking you seriously if
they find out how inept you are? Don't worry, we know, and we never did.
> I needed Vector/List type and I wrote one.
What you wrote was a very long way from being either a Vector or a List.
It would be pressing it to describe it as some sort of data-mangling
object. Still, you were proud enough of it to show it off in public.
> With the help of posters John G Harris and especially
> Ray I further optimized it
LOL. You were spoon-fed the correct code for the array manipulation and
you still made a pig's ear of the implementation.

Turning something that does not work at all into something that works is
stretching the meaning of "optimised" far beyond reason.
> and it is in active use by now.
And as you had not noticed that there was anything wrong with it until it
was pointed out and you were corrected, the odds are good that it would
have been in active use anyway, resulting in that chaotic, bug-ridden
code that is the basis for your reputation here.
> The majority of posts in that thread was indeed in
> the old-clj style of a kind "Why do you need Vector if
> don't need a Vector?" Usually it is not anyone fricken
> business why OP wants to do this or that
You were the OP in that thread, and you posted it to 'show off' your
'skills' after the ridicule your first Vector attempt elicited:-

<URL:http://groups.google.com/group/comp....7a1607e71be9fa>
> - unless there is a suspicion of an illegal activity.
Being completely incapable of any analysis yourself, I can see why you may
see no need to understand the context of a question before it can receive
a reasonable answer. In the real world the 'why' is often the quickest
route to the best 'how'.
> "- I want it because I had a vision to do it by the end
> of the week or else the little creature in my head will
> eat my brains" - anyone may default such answer w/o asking
> if really dying out of curiosity.
And in this thread you have wasted everyone's time going on about
enabling script and ActiveX when the OP clearly stated that the context
was an internal application. So that is failing to see the need to ask
the pertinent questions and failing to comprehend the information already
provided.
> As an exception, explaining: Vector-like object is very
> useful for 3D vector drawing with multiple shapes. Instead
> of handling Z-index it is _times_ quicker to keep a Vector
> with shapes references so to draw them in order of current
> position. Vector (unlike Array) provides effective tools
> for run-time updates and especially inter-layer transfers.
Ah, right. That explains why you implemented it with a series of one or
two statement wrappers making method calls to an underlying Array object
(or at least failed to do that until you were spoon-fed the correct Array
method calls to use).
> If it's still not enough of reason then feel free to go to
> hell from this point forward.
<snip>

What, again?

Richard.

Feb 12 '07 #13

VK
On Feb 13, 2:45 am, "Richard Cornford" <Rich...@litotes.demon.co.uk> wrote:
> And in this thread you have wasted everyone's time going on about
> enabling script and ActiveX when the OP clearly stated that the context
> was an internal application.
As the only on-topic part, I answer it:

"an internal application" doesn't mean that all communication goes over
file:// links or \\JohnDoe\Shared\ requests.

A corporate server is often a regular DNS host like http://www.example.com
which is external for outside users and an intranet one for internal
users. Sometimes admins are overly aggressive or negligent, so
http://www.example.com may fall into the common restricted category.
It is not a technical obstacle of any kind - it is just a normal
precaution check at the deployment stage. As well as ensuring that all
machines are running an acceptably high version of the MSXML library - if
an IE-based environment - and that the jscript.dll version was not
overridden by accident with a lower one - if a Windows 2000 based
environment on top of everything.
Feb 13 '07 #14

On Feb 12, 2:32 pm, jepcastel...@gmail.com wrote:
> On Feb 12, 10:42 am, "Peter Michaux" <petermich...@gmail.com> wrote:
> > The URLs of these one-page apps are bookmarkable. The Yahoo! use
> > is good. The Backbase use is probably not.
> > http://maps.yahoo.com/
> > http://www.backbase.com/
> Hi Peter,
>
> Can you explain why you think the Backbase use of bookmarking is not
> good? Backbase works with Fortune 500 companies that want to add Ajax
> to their Java apps (such as Visa, Vanguard, etc.) so it's certainly a
> valid AJAX solution. Let me know if you have any questions about
> Backbase, or the Backbase website.
I certainly think the bookmarkable URLs in Backbase are very
interesting. In the case of the Yahoo! site the hash describes which
part of the world map is centered and the magnification. To me this is
clearly a one-page application looking at the world with local state.
The location hash is intended to describe local state.

With the Backbase site the various pages look and feel like
old-fashioned static pages. Only the way these pages load is different.
It seems to me on the wrong side of the line in the grey area for how a
URL describes a resource. I wasn't referring to the idea of the
bookmarkable URLs, but rather to using them for an information-based
site like the Backbase site. It is just curiosity, but how do search
engines rank pages where the information is in the hash and not in a
usual URL? Are sites built on the Backbase technology as easy for
search engines to crawl?

Safari and Opera have smaller market share but are still two of the
main four browsers out there. The Backbase website is like a static
site with those browsers on my computer. Safari and Opera are both
very capable browsers. A public Internet website that doesn't work in
basically the same way in IE6/7, O8/9, S2, FF1.5/2 (at least) seems
like a loss to me. What is limiting the hash-style URLs with these two
browsers? I know Safari has a problem programmatically setting the
location.hash property. At a quick look, the Yahoo! site doesn't seem
to suffer because of this. The problem seems to be fixed in recent
WebKit builds. I have thought about a few ways to get around this
problem in Safari. I haven't tried them yet.
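The hash-based local state described above can be sketched like this; the `lat`/`lon`/`zoom` fields mimic the map example and are assumptions, not Yahoo!'s actual scheme:

```javascript
// Serialize view state into a fragment identifier so the URL stays
// bookmarkable without a page refresh.
function stateToHash(state) {
  return '#lat=' + state.lat + '&lon=' + state.lon + '&zoom=' + state.zoom;
}

// Restore state from a hash string (passed in as an argument here so the
// logic stays testable outside a browser).
function hashToState(hash) {
  var state = {};
  var pairs = hash.replace(/^#/, '').split('&');
  for (var i = 0; i < pairs.length; i++) {
    var kv = pairs[i].split('=');
    state[kv[0]] = parseFloat(kv[1]);
  }
  return state;
}
```

In the browser the app would assign `location.hash = stateToHash(state)` and re-read it on load; as noted above, setting `location.hash` programmatically was buggy in Safari at the time.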

Peter

Feb 13 '07 #15
