Reasons for a 3-tier architecture for the WEB?
(NOTE: I said, WEB, NOT WINDOWS.
DON'T shoot your mouth off if you don't understand the difference.)
I hear only one reason, and that's to switch a database from SQL Server to
Oracle or DB2 or vice versa... and that's it.... And a lot of these
enterprises don't need it, as they already know what database they are going
to use and they don't plan on switching databases in and out in the first
place, NOR can they afford to in the first place.
Nobody switches databases every day, yet Microsoft and MVPs are recommending
these so-called best practices for every single .NET implementation, for the
enterprise or the mom and pop.... Sort of like Windows Advanced Server and
Windows Server.... everybody's got to have Advanced Server on their
LAPTOP.....
I mean come on. Let Microsoft fix their own stuff FIRST before recommending
any best practices.
Also, I see no reason to have a business logic tier, as that can be easily
contained in the code-behind.
QUESTION OF THE DAY:
I would really like to know whose idea it was to have 3-tier architecture
for .NET Web Pages. All I see is a carry-over of practices from DNA using
COM and .asp pages.....
3-tier means using the same OLD broken tools when a new technology, .NET,
has better and simpler ways of doing things.
I say there is some hidden motive (like job security) for making things more
complicated and LESS performant when using 3-Tier or N-Tier.
There are a LOT of mind-numb robot MVPs and Microsoft employees who never
question if a particular and well-entrenched way is the best way.......
It's been 2 years and Mr. Bill is still saying things are going slow..... I
wonder why......
Hi,
Multi-tier architectures have proved reasonable for considerably big
projects, where they facilitate maintenance and further development (have
you ever tried to debug a huge code-behind class with thousands of lines of
code?).
It is just easier (well, for some of us, at least :-) to work with logically
separated parts of code - this one works with the database, this one handles
business rules and that one renders user interface.
By the way, the "business logic" tier, in my opinion, is reasonable only
when there are many complex business rules to enforce. This tier can be
omitted for web sites that merely store and display data, but do not perform
any sophisticated data processing.
As for switching databases... well, I would agree that it happens with very
low probability, but if it does happen and all your database code is spread
across the code-behind, long boring hours of monkey work are guaranteed.
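To make the "logically separated parts" concrete, here is a minimal sketch
of the split; the class names, the hard-coded price and the 5% discount rule
are all invented for illustration:

```csharp
using System;

// Data layer: the only code that talks to the database.
public class ProductData
{
    public decimal GetUnitPrice(int productId)
    {
        // A real implementation would run a query here;
        // hard-coded so the sketch stands alone.
        return 10.00m;
    }
}

// Business layer: rules live here, not in the page.
public class OrderLogic
{
    private ProductData _data = new ProductData();

    public decimal QuoteTotal(int productId, int quantity)
    {
        if (quantity <= 0)
            throw new ArgumentOutOfRangeException("quantity");

        decimal total = _data.GetUnitPrice(productId) * quantity;

        // Invented business rule: 5% off orders of 10 or more.
        if (quantity >= 10)
            total = total * 0.95m;

        return total;
    }
}
```

The code-behind then only formats whatever `QuoteTotal` returns, so every
page that sells products shares one copy of the rule.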
--
Dmitriy Lapshin [C# / .NET MVP]
X-Unity Unit Testing and Integration Environment http://x-unity.miik.com.ua
Deliver reliable .NET software
"nospam" <n@ntspam.com> wrote in message
news:#l**************@TK2MSFTNGP10.phx.gbl...
Also, there's one thing that you're missing: the service-oriented
architecture. If you place code in the code-behind page, it can't be called
from another process without loading up the entire web page, transferring to
that site, etc. Using middle-tier logic enables you to hold common services
for use across many applications. It's not just all based on database
issues.
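As a hedged sketch of the difference (the `TaxService` class and its rate
parameter are made up for this example): once a calculation lives in its own
class, a batch job or another application can call it directly, whereas code
buried in a code-behind can only be reached by requesting the page over
HTTP.

```csharp
using System;

// Hypothetical middle-tier service. It could also be exposed as an
// .asmx web service so other processes can reach it remotely.
public class TaxService
{
    public decimal AddSalesTax(decimal subtotal, decimal rate)
    {
        // Round to cents so every caller gets the same answer.
        return Math.Round(subtotal * (1 + rate), 2);
    }
}
```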
Jeff Levinson
Author of "Building Client/Server Applications with
VB.NET: An Example Driven Approach"
Just how many lines of code in a Code-Behind Page are too many?
I have seen thousands and thousands of lines of code in a business tier as
well; I see no difference.
I would also like to know the actual SET of business rules where this would
happen for each and every single web page.
I can think of only a few cases (i.e. web pages) where this might be:
(1.) Placing an order for like a shopping cart
(2.) Credit approval
(3.) Insurance approval
But that's it.
I see no reason to put the (1) login, (2) search, or (3) Order Details, or
the rest of the web site, all in a business tier.
People talk about these "so-called" business rules as if they're in every
single page of the web site, yet that's not true, and it only increases the
complexity of the web site.

> It is just easier (well, for some of us, at least :-) to work with
> logically separated parts of code - this one works with the database,
> this one handles business rules and that one renders user interface.
Can you explain just why it is easier? Like, is there some feature I am
missing? I have coded more pages and sites than any of the top coders I
know, database, business logic and all, and possibly more than all of them
put together.
What about all the time you spend trying to figure out grey-area rules?
Like, is this really in this layer or that layer, or is it really a little
bit of both? And then trying to figure out later where it is 2 months from
now.
"Dmitriy Lapshin [C# / .NET MVP]" <x-****@no-spam-please.hotpop.com> wrote
in message news:ec**************@tk2msftngp13.phx.gbl...
I can only think of one thing that is maybe common, and that's validation.
Other than that, there is nothing else.
Plus, there is a trade-off on encapsulation as well, not to mention
reliability and maintenance.
Having code HERE and THERE amounts to "spaghetti" code.
Any custom code that is called more than once is also a single point of
failure, and each and every web page that uses it must be tested for
Quality Assurance. It seems like the time saved in hand-coding changes is
easily lost in QA. Then there is this myth about the "2 second maintenance
change" that never, ever is really 2 seconds.
"Jeff Levinson [mcsd]" <je***********@comcast.net> wrote in message
news:03****************************@phx.gbl...
"nospam" <n@ntspam.com> wrote in message
news:#l**************@TK2MSFTNGP10.phx.gbl... Reasons for a 3-tier achitecture for the WEB? (NOTE: I said, WEB, NOT WINDOWS.
DON'T shoot your mouth off if you don't understand the difference.)
I hear only one reason and that's to switch a database from SQL Server to Oracle or DB2 or vice versa... and that's it.... And a lot of these enterprises don't need it as they already know what database they are
going to use and they don't plan on switching in and out database in the first place, NOR can they afford to in the first place
Nobody switches databases everyday yet Microsoft and MVP's are
recommending these so called best practices for every single .NET implementation for
the enterprise or the mom and pop....Sort of like Windows Advanced Server and Windows Server....everybody's got to have Advanced server on their LAPTOP.....
If you had sold a product to companies, you'd easily find out that the
database implementation is not always as you had planned. The same product
can be deployed in different companies using different RDBMS products
(well, if you don't want to, you've just limited yourself to a few clients).
Not to mention those "Mom and Pop" shops that can only afford Access, and
later on outgrow the system and demand a robust RDBMS. Well, that's one
reason. In some cases (especially tracing and logging), the persistent
store is not always a database. It can be a flat file, your email service
or logging objects in Windows. Wouldn't you rather write a design where you
can just "switch" your application to use any of these objects that log
information differently? I know you're thinking, why in the world would you
choose to log information in a flat file when you have a database or
whatever. Depending on who you are... in today's distributed environments,
technical support can be in far-flung India while the program is running in
Iraq. Are you going to send the entire database image to them so they can
analyze a problem that occurred in the last 5 minutes? I can think of so
many reasons why you should separate your "Data Layer" from the other
"Layers."
> I mean come on. Let Microsoft fix their own stuff FIRST before
> recommending any best practices.
I agree on this one but this does not have any bearing on your question of
"3 Tier Architecture."
> Also see no reason to have a business logic tier as that can be easily
> contained in the code behind.
Again, imagine an application where you have to write an interface for PDAs
and a full browser like IE. Coding the business rules once for pages that
supply information to a PDA and again for IE is just too much to do. I
don't know about you, but I would rather spend more time upgrading my system
than doing interfaces (with built-in business rules) so many times,
depending on how many client interfaces I have to deal with. Just consider
if you have to write a "Touch Tone" interface to your application as well.
> QUESTION OF THE DAY: I would really like to know, whose idea it was to
> have 3-tier architecture for .NET Web Pages. All I see is a carry-over of
> practices from DNA using COM and .asp pages.....
This idea was definitely not from Microsoft. But the idea behind n-tier
architecture is code reuse. Unless you haven't heard of it.
> 3-tier is using the same OLD broken tools when a new technology, .NET,
> has better and simpler ways of doing things.
N-Tier development is an architectural design that allows a development team
to work together by establishing standards before the first code is ever
written. That way, you can avoid the saying, "Too many cooks spoil the
broth."
> I say there is some hidden motive (like job security) for making things
> more complicated and LESS performing when using 3-Tier or N-Tier.
Even in non-n-tier designs, job security has always been a motive. It isn't
even hidden anymore. Don't you think so? I wonder why so many people are
still in denial that job security is a goal.
> There are a LOT of mind-numb robot MVP's and Microsoft employees who
> never question if a particular and well-entrenched way is the best
> way.......
That's because it's a proven "practice." I don't even call it a design
anymore. Code behind leads to too much duplicate code all over the
application. But since you prefer to do things the long way... good luck
investing more time in writing a simple "logon" screen. By the way, the
tools that you get out there to write "components" are based on the very
n-tier design that you don't agree with. Have you noticed that they work
with almost every DB out there? I wonder why?
> It's been 2 years and Mr. Bill is still saying things are going
> slow..... I wonder why......
You know, there are other tools out there you can use, some are even free...
but they too have been centered on n-tier development. I suggest you try
J2EE. You'll see the real power of n-tier design. It's even amazing how
they've done away with proprietary SQL code.
comments inline below
"nhoel" <no*****@nomail.com> wrote in message
news:%2****************@TK2MSFTNGP09.phx.gbl...

> In some cases (especially tracing and logging), the persistent store is
> not always a database. It can be a flat file, your email service or
> logging objects in Windows.
Still weak arguments.... FLAT FILE... come ON? TO INDIA? EXCUSE me...
that's NO reason.
Back up the database, zip it up and send it over there... what's the
difference?
These points are simply weird and don't occur in real life.
Mom and Pop on Access.... just have them install MSDE then?.... the amount
of effort to SWITCH is total NONSENSE, as you could have just had them
switch to MSDE....... They, the client, are happy and so are you.
Getting a Mom and Pop to buy your enterprise app is simply improbable, as
they can't afford it.... YET, if they could afford it, why would they use
Access?

> > I mean come on. Let Microsoft fix their own stuff FIRST before
> > recommending any best practices.

> I agree on this one but this does not have any bearing on your question
> of "3 Tier Architecture."
> > Also see no reason to have a business logic tier as that can be easily
> > contained in the code behind.

> Again, imagine an application where you have to write an interface for
> PDAs and a full browser like IE. Coding the business rules for pages
> that supply information in a PDA and another for IE is just too much to
> do. I don't know about you, but I would rather spend more time upgrading
> my system than doing interfaces (with built-in business rules) so many
> times, depending on how many client interfaces I have to deal with. Just
> consider if you have to write a "Touch Tone" interface to your
> application as well.
I would like to know if anyone BUYS stuff off of a PDA like they do a Web
Site shopping cart.
Can you really shop around and surf with a PDA? Just look at how small the
screen is.... people have enough trouble with 640x480 now; I can't imagine
what it would be like on a PDA.
Just because any device can display information doesn't mean it's going to
have or need a set of complex business rules in the first place. ANYTHING
that complex, you've got to ask yourself if the customer is really going to
buy something OR spend time filling out a form on a PDA... YIKES!
Second, it's completely poor design practice to have the same set of
business rules, as that would be a single point of failure anyway... and the
PDA business requirements are going to be TOTALLY different than a Web
Site's.

> > QUESTION OF THE DAY: I would really like to know, whose idea it was to
> > have 3-tier architecture for .NET Web Pages. All I see is a carry-over
> > of practices from DNA using COM and .asp pages.....

> This idea was definitely not from Microsoft. But the idea behind n-tier
> architecture is code reuse. Unless you haven't heard of it.
OK, well who was it from then if not Microsoft?
> > 3-tier is using the same OLD broken tools when a new technology, .NET,
> > has better and simpler ways of doing things.

> N-Tier development is an architectural design that allows a development
> team to work together by establishing standards before the first code is
> ever written. That way, you can avoid the saying, "Too many cooks spoil
> the broth."
2-Tier can easily do this.... Internet and Intranet web sites have hundreds
of web pages where each set of web pages belongs to a department.
With N-Tier, say you have one module, but it needs to be worked on by two
different divisions. Say it's the database module.... what happens if two
people need access at the same time?

> > I say there is some hidden motive (like job security) for making
> > things more complicated and LESS performing when using 3-Tier or
> > N-Tier.
> Even in non n-tier designs, job security has always been a motive. It
> isn't even hidden anymore. Don't you think so? I wonder why so many
> people are still in denial that job security is a goal.

> > There are a LOT of mind-numb robot MVP's and Microsoft employees who
> > never question if a particular and well-entrenched way is the best
> > way.......

> That's because it's a proven "practice." I don't even call it a design
> anymore. Code behind leads to too much duplicate code all over the
> application. But since you prefer to do things the long way... good luck
> investing more time in writing a simple "logon" screen. By the way, the
> tools that you get out there to write "components" are based on the very
> n-tier design that you don't agree with. Have you noticed that they work
> with almost every DB out there? I wonder why?
The only thing I see proven is the number of failed enterprise projects
like CRM and ERP.
CRM especially, as they are always in the news getting sued for a failed
implementation.
ERP... well, that takes at least a year of dev, and by that time the
technology has changed.
IF YOU notice, the POST said WEB, NOT WINDOWS..... Looking at the TOOLS,
like VS.NET: they are Windows apps that have a really long beta test and
are updated once a year.... big difference here.

> > It's been 2 years and Mr. Bill is still saying things are going
> > slow..... I wonder why......

> You know, there are other tools out there you can use, some are even
> free... but they too have been centered on n-tier development. I suggest
> you try J2EE. You'll see the real power of n-Tier design. It's even
> amazing how they've done away with proprietary SQL code.
J2EE!!!! Aghhhh.... have you looked at their code? Abstraction after
abstraction after abstraction..... YES, you are right, that's N-TIER, where
the "N" stands for "Not enough" Tiers.... It's no wonder J2EE is so slow
and takes forever to implement.
Inline...
"nospam" <n@ntspam.com> wrote in message
news:O4**************@TK2MSFTNGP12.phx.gbl...

> I can only think of one thing that is maybe common, and that's
> validation. Other than that, there is nothing else.
There are plenty of services that can be shared beyond simple validation.
> Plus, there is a trade-off on encapsulation as well, not to mention
> reliability and maintenance.
You are just arguing in favour of keeping business logic in a separate tier
from the UI logic.
> Having code HERE and THERE amounts to "spaghetti" code.
Ditto.
> Any custom code that is called more than once is also a single point of
> failure, and each and every web page that uses it must be tested for
> Quality Assurance. It seems like the time saved in hand-coding changes
> is easily lost in QA. Then there is this myth about the "2 second
> maintenance change" that never, ever is really 2 seconds.
You've completely lost me here. How would you code an application so that
every single web page _didn't_ need to be tested individually? With your
"single point of failure" couldn't you stop testing when it did fail, and
then fix it before you go and test it again from a different page knowing
that it's just going to fail again?
Here's a real example of why encapsulating business logic in a single
location is a good thing. I worked on a site that had to validate addresses.
Some genius decided that the only valid postal codes were the ones that had
5 digits. Brilliant decision for an international site. When I got to
changing the business logic to allow international postal codes I discovered
(over a period of about 2 weeks tracking down all the places that a postal
code was entered or used) that the team of consultants had put the
validation logic in the database, in the data access layer, in the business
logic layer, in the UI code and in client-side code. I should note that the
address wasn't being validated multiple times because they only used one of
those layers to do the validation depending on where the address was being
entered. Not only was the logic in 5 different locations, they used
different rules (i.e. some allowed anything and long as it wasn't blank,
some numeric only, some specific alpha-numeric patterns and sometimes it
only required it for US addresses).
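The fix for a story like that is one validation method that every layer
calls. A hedged sketch (the rules themselves are invented; real postal-code
formats vary by country):

```csharp
using System.Text.RegularExpressions;

// Hypothetical single validation point: the database, data access layer,
// business layer, UI and client script would all defer to this one method
// instead of keeping five diverging copies.
public class AddressRules
{
    public static bool IsValidPostalCode(string code, string country)
    {
        if (code == null || code.Length == 0)
            return false;

        if (country == "US")
            return Regex.IsMatch(code, @"^\d{5}(-\d{4})?$");

        // Invented non-US rule: any non-blank code up to 10 characters.
        return code.Length <= 10;
    }
}
```

Allowing international codes then means changing this method once, not
hunting through five layers for two weeks.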
You say I have a single point of failure; I say you have multiple places to
completely stuff things up.
Maybe you should come back to this discussion after you've got a bit of
experience developing and maintaining a non-trivial web application.
Colin