
SQL Server I/O bottleneck?

I built a test job that loads data into the database. I get great
performance with Oracle, and I'm trying to tune SQL Server to get the
same kind of performance.

Based on system monitoring it looks like an I/O issue, so I started
investigating using the SQL Server performance tuning technical
reference.

Here are my System Monitor findings during the job:
disk writes per second: 3670
disk reads per second: 0
avg disk queue length: 0.45
avg disk sec/read: 0
avg disk sec/write: 0

My total I/Os per second is 7200: (disk writes per second * 2) + disk
reads. The writes are doubled because I am mirroring.

I have a 6-disk RAID array configured as RAID 1/0 with 200M of cache
configured read/write. The average disk sec/read and sec/write of 0
means no I/O latency, but my I/Os per disk are high compared to
Microsoft's recommendation of 125 I/Os per disk. I don't see an I/O
issue based on the numbers, except for the amount of disk writes per
second. The same job run in Oracle had 187 disk writes per second,
compared to 3600 writes per second in SQL Server. What's up with that?

Here's my calculation:
I/O per disk = [reads + (2 * writes)] / number of disks
I/O per disk = [0 + (2 * 3600)] / 6 = 1200

Microsoft is saying I need way more disks based on their recommended
calculation:

total I/O / 125 I/Os per disk = disks needed
7200 / 125 = 57.6, so 58 disks

If the disks are showing no latency in the system monitor, how can disk
be a bottleneck? Based on Microsoft's recommendation I need 58 disks to
relieve the disk bottleneck, but I have no I/O latency and the disk
queue length is 0.45.

Any recommendations?

Sep 14 '06 #1
The same job run in Oracle had 187 disk writes per second, compared
to 3600 writes per second in SQL Server. What's up with that?
It's hard to explain the I/O disparity without knowing the details of your
job. My first guess is that you might be committing each SQL Server insert
(default is autocommit) but not each Oracle insert (default is implicit
transactions on). This would require SQL Server to perform many more
transaction log writes.
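
For illustration, a minimal T-SQL sketch of the difference (the table and
values here are hypothetical, not taken from your job):

-- Autocommit: each INSERT commits by itself, forcing a
-- transaction log flush for every row.
INSERT INTO tbl (col1) VALUES (1);
INSERT INTO tbl (col1) VALUES (2);

-- One explicit transaction: the log is flushed once, at COMMIT.
BEGIN TRANSACTION;
INSERT INTO tbl (col1) VALUES (1);
INSERT INTO tbl (col1) VALUES (2);
COMMIT TRANSACTION;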
If the disks are showing no latency in the system monitor how can disk
be a bottleneck?
An I/O intensive process will generally run at the max speed of your disk
subsystem. Not all I/O is equal; sequential I/O is much more efficient than
random I/O. You can typically do about twice as many sequential I/Os per
second as random I/Os. This is one reason why it's usually best to
perform large data loads using a bulk insert method.
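
As a rough sketch of what that looks like (the file path and table name
are hypothetical):

-- Streams a file into the table; with TABLOCK the load can be
-- minimally logged under the simple or bulk-logged recovery model.
BULK INSERT dbo.inserttest
FROM 'C:\load\data.txt'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', TABLOCK);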

Note that your performance monitor is reporting 3670 writes/sec but,
according to the perf tuning technical reference, your 3 usable disks should
only sustain about 375/sec (3 x 125). It looks to me like the write caching
controller is skewing the numbers. Data must eventually get written to
disk, but since the controller writes data asynchronously, that won't become
a bottleneck until the cache is saturated.

You mention only a single RAID 01 array. Is the transaction log on the same
array as the data files?

--
Hope this helps.

Dan Guzman
SQL Server MVP

"dunleav1" <jm*@dunleavyenterprises.com> wrote in message
news:11**********************@p79g2000cwp.googlegroups.com...
> snip <

Sep 15 '06 #2

Dan Guzman wrote:
The same job run in Oracle had 187 disk writes per second, compared
to 3600 writes per second in SQL Server. What's up with that?

It's hard to explain the I/O disparity without knowing the details of your
job. My first guess is that you might be committing each SQL Server insert
(default is autocommit) but not each Oracle insert (default is implicit
transactions on). This would require SQL Server to perform many more
transaction log writes.
Both jobs, Oracle and MSSQL, go through JDBC - autocommit is on, and the
inserts are sent as a stored procedure and executed as a batch of 10000.
>
If the disks are showing no latency in the system monitor how can disk
be a bottleneck?

An I/O intensive process will generally run at the max speed of your disk
subsystem. Not all I/O is equal; sequential I/O is much more efficient than
random I/O. You can typically do about twice as many sequential I/Os per
second than random I/Os. This is one reason why it's usually best to
perform large data loads using a bulk insert method.
I'm using a batch because I don't want to use a log-bypassed bulk insert.
This test is run in the simple recovery model, but production will be the
full recovery model.
>
Note that your performance monitor is reporting 3670 writes/sec but,
according to the perf tuning technical reference, your 3 usable disks should
only sustain about 375/sec (3 x 125). It looks to me like the write caching
controller is skewing the numbers. Data must eventually get written to
disk, but since the controller writes data asynchronously, that won't become
a bottleneck until the cache is saturated.
I am loading the same amount of data in Oracle and MSSQL, but MSSQL is
writing a lot more to disk.
>
You mention only a single RAID 01 array. Is the transaction log on the same
array as the data files?
Yes - I know it's not optimal, but Oracle has the same suboptimal
configuration and performs a lot faster.
> snip <
Sep 15 '06 #3
It's hard to explain the I/O disparity without knowing the details of your
job. My first guess is that you might be committing each SQL Server insert
(default is autocommit) but not each Oracle insert (default is implicit
transactions on). This would require SQL Server to perform many more
transaction log writes.
Can you explain this paragraph a little more? I'm not a Java guy.
Thanks for your help.

Sep 15 '06 #4

dunleav1 wrote:
It's hard to explain the I/O disparity without knowing the details of your
job. My first guess is that you might be committing each SQL Server insert
(default is autocommit) but not each Oracle insert (default is implicit
transactions on). This would require SQL Server to perform many more
transaction log writes.
Can you explain this paragraph a little more. I'm not a java guy.
Thanks for your help.
Looking at the Java code - autocommit is set to false using the
setAutoCommit method.

Sep 15 '06 #5
dunleav1 (jm*@dunleavyenterprises.com) writes:
dunleav1 wrote:
It's hard to explain the I/O disparity without knowing the details of
your job. My first guess is that you might be committing each SQL
Server insert (default is autocommit) but not each Oracle insert
(default is implicit transactions on). This would require SQL Server
to perform many more transaction log writes.
Can you explain this paragraph a little more. I'm not a java guy.
Thanks for your help.

Looking at the java code- autocommit is set to false using the
setAutoCommit class.
First of all, if you want a more precise answer, it helps if you post
your code. At the moment, we are only guessing.

On SQL Server the default behaviour is that each statement is a separate
transaction. If you want it all to be one transaction, you have to start
an explicit transaction with BEGIN TRANSACTION. Or you can use SET
IMPLICIT_TRANSACTIONS ON, which means that a DML statement will start
a transaction that you will need to commit explicitly later.
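
A minimal sketch of the two styles (the table is hypothetical):

-- Explicit transaction:
BEGIN TRANSACTION;
INSERT INTO tbl (col1) VALUES (1);
INSERT INTO tbl (col1) VALUES (2);
COMMIT TRANSACTION;

-- Implicit transactions: the first DML statement opens a
-- transaction that must be committed explicitly.
SET IMPLICIT_TRANSACTIONS ON;
INSERT INTO tbl (col1) VALUES (3);
COMMIT TRANSACTION;
SET IMPLICIT_TRANSACTIONS OFF;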

Maybe that is what happens in your Java code when you set autoCommit to
false - I have no experience with Java programming or its APIs. But
personally I feel that no matter the engine, explicit transactions
are better than relying on implicit transactions.
--
Erland Sommarskog, SQL Server MVP, es****@sommarskog.se

Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/pro...ads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodinf...ons/books.mspx
Sep 15 '06 #6
I'm using a batch because I don't want to use a log-bypassed bulk insert.
This test is run in the simple recovery model, but production will be the
full recovery model.
You can use a fully-logged bulk insert by changing the recovery model to
FULL. Bulk insert is the fastest way to load data into SQL Server.
I am loading the same amount of data in Oracle and MSSQL, but MSSQL is
writing a lot more to disk.
> snip <
Yes - I know it's not optimal, but Oracle has the same suboptimal
configuration and performs a lot faster.
Both these issues may be because of transaction differences. If each insert
is a separate transaction in the SQL Server job but not in the Oracle job,
many more transaction log writes will be required by SQL Server. This will
increase write I/O and compound the negative effect of having data and log
on the same array.

--
Hope this helps.

Dan Guzman
SQL Server MVP

"dunleav1" <jm*@dunleavyenterprises.com> wrote in message
news:11********************@d34g2000cwd.googlegroups.com...
> snip <

Sep 16 '06 #7
First issue:
I wrote a T-SQL transaction to insert 200000 records at a time, and it
was approx. 20% slower than Oracle. I have Oracle configured with a
large log cache and a large redo log, so a lot happens in the cache
before a write. Is there any way to configure the log cache to be larger
in SQL Server, or do I just keep giving the entire engine memory? I
tried altering the recovery interval, hoping it was part of the
computation for the size of the log cache within the engine, but saw no
real improvement.

Second issue - It looks like MSSQL doesn't handle log/data file
contention as well as Oracle when the log and data files are on the same
array. I know it's not the preferred configuration, but I'm working with
the hardware I have. I don't see any latency on the RAID array, but the
numbers are lower than Oracle's.

Third issue - SQL Server does about 50% more context switching than
Oracle, even when using pooling instead of threading.

Fourth issue - When going through JDBC, either the JDBC driver or the
engine's JDBC/Java API is a lot slower than Oracle's.

Sep 16 '06 #8

Two more things.

What is considered to be a high number of stolen pages?
I am getting about 10000 on average during a test run.

I have configured SQL Server to have 2048M of memory, but Task Manager
only shows 1700M allocated to SQL Server. If SQL Server needs more
memory because of stolen pages, why isn't it grabbing more than 1700M?

The server has 4G of RAM and a 4G swap file.

Sep 16 '06 #9
dunleav1 (jm*@dunleavyenterprises.com) writes:
What is considered to be a high number of stolen pages?
I am getting about 10000 on average during a test run.

I have configured SQL Server to have 2048M of memory, but Task Manager
only shows 1700M allocated to SQL Server. If SQL Server needs more
memory because of stolen pages, why isn't it grabbing more than 1700M?

The server has 4G of RAM and a 4G swap file.
Which version and edition of SQL Server are you using?

Have you set the /3GB switch in Boot.ini for Windows?

--
Erland Sommarskog, SQL Server MVP, es****@sommarskog.se

Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/pro...ads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodinf...ons/books.mspx
Sep 16 '06 #10
dunleav1 (jm*@dunleavyenterprises.com) writes:
I wrote a T-SQL transaction to insert 200000 records at a time, and it
was approx. 20% slower than Oracle. I have Oracle configured with a
large log cache and a large redo log, so a lot happens in the cache
before a write. Is there any way to configure the log cache to be larger
in SQL Server, or do I just keep giving the entire engine memory? I
tried altering the recovery interval, hoping it was part of the
computation for the size of the log cache within the engine, but saw no
real improvement.
So what does this transaction look like? Do you send one INSERT
statement at a time from the client? Is it one big batch of INSERT
VALUES statements?

The fastest way to load a lot of data is by bulk insert. I don't know
if bulk operations are exposed in the Java APIs, but you can always
use command-line BCP.

If you don't want to deal with bulk-load, you could consider composing an
XML document of the data (or a couple, to insert batchwise) which you can
unpack in the server and insert in a single statement. This is slower than
bulk load, but much faster than sending individual INSERT statements.
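
For instance, on SQL 2005 something along these lines (all names here are
hypothetical) unpacks a whole XML batch in one INSERT:

DECLARE @x xml;
SET @x = '<rows><r c1="1" c2="1"/><r c1="2" c2="2"/></rows>';

-- Shred the XML and insert every row in a single statement.
INSERT INTO tbl (col1, col2)
SELECT r.value('@c1', 'int'), r.value('@c2', 'int')
FROM @x.nodes('/rows/r') AS t(r);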

If you don't want to deal with XML, but run plain statements, the fastest
is this way:

INSERT tbl(col1, col2, col3, ....)
EXEC('SELECT val1a, val2a, val3a
SELECT val1b, val2b, val3b
....')

What you should not try is:

INSERT tbl (col1, col2, col3, ...)
SELECT val1a, val2a, val3a
UNION ALL
SELECT val1b, val2b, val3b
UNION ALL
...

For moderate data sizes this is faster than many INSERT statements, but
as the number of rows increases, this becomes disastrously slow. You may
get faster disk I/O, but the bottleneck is in the compilation of the
statement.

I'm aware that the alternatives I have suggested are not portable, but
if you need to insert that many rows, what's in standard SQL is simply
not a good idea, at least not on SQL Server.
Fourth issue - When going through jdbc - either the jdbc driver or the
engine jdbc/java api is a lot slower than Oracle.
I don't know which JDBC driver you use, but I believe that Microsoft's
free driver is a fairly lightweight thing and far from the best on the
market. The JDBC driver for SQL 2000 was based on version 2.0 of
DataDirect's driver, and they have newer versions out.
--
Erland Sommarskog, SQL Server MVP, es****@sommarskog.se

Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/pro...ads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodinf...ons/books.mspx
Sep 16 '06 #11
One more thing: how are the INSERTs organized with regard to indexes
on the table, specifically the clustered index? If the data comes in an
order that does not agree with the clustered index, you can get a lot of
page splits and overhead.
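
For instance, with a hypothetical table like the one below, rows inserted
in ascending key order are appended at the end of the clustered index,
while random key values cause page splits throughout it:

CREATE TABLE dbo.inserttest
(
    id   int NOT NULL,
    col1 int,
    CONSTRAINT PK_inserttest PRIMARY KEY CLUSTERED (id)
);
-- Inserting id = 1, 2, 3, ... keeps the index writes sequential.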

--
Erland Sommarskog, SQL Server MVP, es****@sommarskog.se

Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/pro...ads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodinf...ons/books.mspx
Sep 16 '06 #12
I just have a loop that increments a counter for my inserts. The
counter is used for the clustered key column.

Erland Sommarskog wrote:
One more thing: how are the INSERTs organized with regard to indexes
on the table, specifically the clustered index? If the data comes in an
order that does not agree with the clustered index, you can get a lot of
page splits and overhead.

Sep 17 '06 #13
I can't use bulk load in my production code.
I'm creating test code to generate load so I can tune my product.

Here's the test code I have been running to tune I/O on MSSQL. It is
run from T-SQL, not Java. Running this code locally is about 20% slower
than loading Oracle via JDBC, but a lot faster than going through JDBC.

begin transaction
declare @c int
set @c = 0
while @c < 500000
begin
    set @c = @c + 1
    insert into insertestjd values (@c, @c, @c, @c)
end
commit transaction

The Microsoft driver (even 1.1.1560) is slow compared to other 3rd
party JDBC drivers for SQL Server.

Sep 17 '06 #14
I am using SQL Server 2005 SP1 on Windows Server 2003 SP1.
>
Which version and edition of SQL Server are you using?

Have you set the /3GB switch in Boot.ini for Windows?

No. Will that allow SQL Server to utilize more memory?

Sep 17 '06 #15
dunleav1 (jm*@dunleavyenterprises.com) writes:
I am using SQL Server 2005 SP1 on Windows Server 2003 SP1.
>>
Which version and edition of SQL Server are you using?

Have you set the /3GB switch in Boot.ini for Windows?

No, will that allow sql server to utilize more memory?
Yes. Without it, the OS reserves the address space above 2GB for itself,
so the SQL Server process can use at most 2GB.
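
For reference, /3GB is appended to the operating system line in Boot.ini;
the ARC path below is only an example from a typical machine:

[operating systems]
multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Windows Server 2003" /fastdetect /3GB
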
--
Erland Sommarskog, SQL Server MVP, es****@sommarskog.se

Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/pro...ads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodinf...ons/books.mspx
Sep 17 '06 #16
dunleav1 (jm*@dunleavyenterprises.com) writes:
I can't use bulk load in my production code.
I'm creating test code to generate load so I can tune my product.

Here's the test code I have been running to tune I/O on MSSQL. It is
run from T-SQL, not Java. Running this code locally is about 20% slower
than loading Oracle via JDBC, but a lot faster than going through JDBC.

begin transaction
declare @c int
set @c = 0
while @c < 500000
begin
    set @c = @c + 1
    insert into insertestjd values (@c, @c, @c, @c)
end
commit transaction
Do you run the above with SET NOCOUNT ON? If you don't, a (1 row
affected) message is sent to the client for every row inserted.

When I added SET NOCOUNT ON, your script ran in 10 seconds from Mgmt
Studio. I did not want to run it with SET NOCOUNT OFF.
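
For reference, the only change needed is one line at the top of the batch:

SET NOCOUNT ON  -- stop the per-INSERT rows-affected message
begin transaction
declare @c int
set @c = 0
while @c < 500000
begin
    set @c = @c + 1
    insert into insertestjd values (@c, @c, @c, @c)
end
commit transaction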

I also tried this:

insert into inserttest
select Number, Number, Number, Number
from listtest..Numbers
where Number < 500000
go

Where Numbers is a one-column table with numbers from 1 to 999999 that
I had lying around. This ran in six seconds.
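
If you don't have such a table around, a sketch of one way to build it
(the names are just an example):

-- One-column Numbers table, 1..999999, built once and reused.
CREATE TABLE Numbers (Number int NOT NULL PRIMARY KEY);

DECLARE @i int;
SET @i = 1;
BEGIN TRANSACTION;
WHILE @i < 1000000
BEGIN
    INSERT INTO Numbers (Number) VALUES (@i);
    SET @i = @i + 1;
END
COMMIT TRANSACTION;
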
--
Erland Sommarskog, SQL Server MVP, es****@sommarskog.se

Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/pro...ads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodinf...ons/books.mspx
Sep 17 '06 #17
I'm running my test locally, so the execution time is the same whether
NOCOUNT is set or not. Thanks.

Erland Sommarskog wrote:
> snip <
Sep 18 '06 #18
On 18 Sep 2006 07:50:18 -0700, "dunleav1"
<jm*@dunleavyenterprises.com> wrote:
>I'm running my test locally so the execution time is the same whether
nocount is set or not. Thanks.
Did you determine this by running it both ways? I would be surprised
if there was no impact at all, even running locally. Half a million
of anything has to take time.

Roy Harvey
Beacon Falls, CT
Sep 18 '06 #19
dunleav1 (jm*@dunleavyenterprises.com) writes:
I'm running my test locally so the execution time is the same whether
nocount is set or not. Thanks.
Did you actually try it?

When I ran your script with SET NOCOUNT in effect from a Perl script it took
14 seconds with an SQL 2005 instance on the same machine. When NOCOUNT was
OFF it took 43 seconds.

In your case I would expect an even greater difference, since you appear
to be using Microsoft's slow JDBC driver.

--
Erland Sommarskog, SQL Server MVP, es****@sommarskog.se

Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/pro...ads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodinf...ons/books.mspx
Sep 18 '06 #20
