
SQL Server storing large amounts of data in multiple tables

Hello,
Currently we have a database, and we need it to be able to store
millions of records. The data in the table can be divided up by
client, and each record stores nothing but about 7 integers.
| table |
| id | clientId | int1 | int2 | int3 | ... |

Right now, our benchmarks indicate a drastic increase in performance
if we divide the data into different tables: for example,
table_clientA, table_clientB, and table_clientC, despite the fact that
the tables contain exactly the same columns. This, however, does not
seem very clean or elegant to me, and rather illogical, since the
database exists as a single file on the hard drive.

| table_clientA |
| id | clientId | int1 | int2 | int3 | ...

| table_clientB |
| id | clientId | int1 | int2 | int3 | ...

| table_clientC |
| id | clientId | int1 | int2 | int3 | ...

Is there any way to duplicate this increase in database performance
gained by splitting the table, perhaps by using a certain type of
index?

Thanks,
Jeff Brubaker
Software Developer
Jul 20 '05 #1
Why not create a view that combines the separate tables back into your
original format? You could even place the different base tables in different
locations (i.e., different servers) - the whole distributed partitioned view
concept.
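
A rough sketch of what a local partitioned view along those lines could
look like (illustrative only: I'm assuming clientId is an int and using 1
and 2 as example values; the view, key and constraint names are made up):

-- Each member table constrains its own clientId value; the CHECK
-- constraints let the optimizer skip members that cannot match.
CREATE TABLE dbo.table_clientA (
    id       int NOT NULL,
    clientId int NOT NULL CHECK (clientId = 1),
    int1     int NULL,
    int2     int NULL,
    CONSTRAINT PK_table_clientA PRIMARY KEY (clientId, id)
)
CREATE TABLE dbo.table_clientB (
    id       int NOT NULL,
    clientId int NOT NULL CHECK (clientId = 2),
    int1     int NULL,
    int2     int NULL,
    CONSTRAINT PK_table_clientB PRIMARY KEY (clientId, id)
)
GO
CREATE VIEW dbo.clientData
AS
    SELECT id, clientId, int1, int2 FROM dbo.table_clientA
    UNION ALL
    SELECT id, clientId, int1, int2 FROM dbo.table_clientB
GO
-- Queries go against the view; only the member table whose CHECK
-- constraint allows clientId = 2 needs to be touched.
SELECT int1, int2 FROM dbo.clientData WHERE clientId = 2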

Other than this, what do you have indexed on this table? What kind of query
is showing a drastic improvement when you separate the table like this?

-Chuck Urwiler, MCSD, MCDBA

Jul 20 '05 #2
[posted and mailed, please reply in news]

jeff brubaker (je**@priva.com) writes:
...

Is there any way to duplicate this increase in database performance
gained by splitting the table, perhaps by using a certain type of
index?


It is not implausible, but without further knowledge of your tables
and the benchmark queries, it is impossible to tell.

You could get a more informative answer if you posted:

o The CREATE TABLE statements (both for the unpartitioned table
  and the partitioned tables).
o Any indexes on the tables.
o The queries you use for the benchmark.
o If you have scripts that generate data for the benchmarks, that
  would be extremely useful. (Provided that they are reasonably small.)

Which client did you use for the benchmark? Query Analyzer?
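
If you are timing things by hand, it may be easier to let the server
report the numbers for you. A minimal sketch (the table names below are
just placeholders for your real combined and per-client tables):

SET STATISTICS TIME ON
SET STATISTICS IO ON

SELECT COUNT(*) FROM bigTable WHERE clientId = 5   -- combined-table variant
SELECT COUNT(*) FROM table_clientA                 -- per-client variant

SET STATISTICS IO OFF
SET STATISTICS TIME OFF

This prints CPU time, elapsed time, and the number of logical and
physical reads for each statement, which makes the two variants easier
to compare.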

--
Erland Sommarskog, SQL Server MVP, so****@algonet.se

Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techinf...2000/books.asp
Jul 20 '05 #3
Erland Sommarskog <so****@algonet.se> wrote in message news:<Xn**********************@127.0.0.1>...

Okay, sorry for the delay. Here is a SQL script to set up the
experiment. Basically it builds one table with 100,000 records, and 10
small tables with 10,000 records each. Selecting all the records from
table_5 is substantially faster than selecting all the records from
bigTable where clientId = 5.

SET NOCOUNT ON
/* Drop any tables that might exist */

if exists (select * from dbo.sysobjects where id = object_id(N'[dbo].[bigTable]')
           and OBJECTPROPERTY(id, N'IsUserTable') = 1)
    drop table [dbo].[bigTable]
if exists (select * from dbo.sysobjects where id = object_id(N'[dbo].[table_0]')
           and OBJECTPROPERTY(id, N'IsUserTable') = 1)
    drop table [dbo].[table_0]
if exists (select * from dbo.sysobjects where id = object_id(N'[dbo].[table_1]')
           and OBJECTPROPERTY(id, N'IsUserTable') = 1)
    drop table [dbo].[table_1]
if exists (select * from dbo.sysobjects where id = object_id(N'[dbo].[table_2]')
           and OBJECTPROPERTY(id, N'IsUserTable') = 1)
    drop table [dbo].[table_2]
if exists (select * from dbo.sysobjects where id = object_id(N'[dbo].[table_3]')
           and OBJECTPROPERTY(id, N'IsUserTable') = 1)
    drop table [dbo].[table_3]
if exists (select * from dbo.sysobjects where id = object_id(N'[dbo].[table_4]')
           and OBJECTPROPERTY(id, N'IsUserTable') = 1)
    drop table [dbo].[table_4]
if exists (select * from dbo.sysobjects where id = object_id(N'[dbo].[table_5]')
           and OBJECTPROPERTY(id, N'IsUserTable') = 1)
    drop table [dbo].[table_5]
if exists (select * from dbo.sysobjects where id = object_id(N'[dbo].[table_6]')
           and OBJECTPROPERTY(id, N'IsUserTable') = 1)
    drop table [dbo].[table_6]
if exists (select * from dbo.sysobjects where id = object_id(N'[dbo].[table_7]')
           and OBJECTPROPERTY(id, N'IsUserTable') = 1)
    drop table [dbo].[table_7]
if exists (select * from dbo.sysobjects where id = object_id(N'[dbo].[table_8]')
           and OBJECTPROPERTY(id, N'IsUserTable') = 1)
    drop table [dbo].[table_8]
if exists (select * from dbo.sysobjects where id = object_id(N'[dbo].[table_9]')
           and OBJECTPROPERTY(id, N'IsUserTable') = 1)
    drop table [dbo].[table_9]
/* Create the tables */

CREATE TABLE [dbo].[bigTable] (
[id1] [int] NULL ,
[id2] [int] NULL ,
[id3] [int] NULL ,
[id4] [int] NULL ,
[id5] [int] NULL ,
[id6] [int] NULL ,
[id7] [int] NULL ,
[id8] [int] NULL ,
[id9] [int] NULL ,
[id10] [int] NULL ,
[id11] [int] NULL ,
[id12] [int] NULL ,
[id13] [int] NULL ,
[id14] [int] NULL ,
[id15] [int] NULL ,
[id16] [int] NULL ,
[id17] [int] NULL ,
[id18] [int] NULL ,
[id19] [int] NULL ,
[id20] [int] NULL ,
[clientid] [int] NULL
) ON [PRIMARY]

CREATE TABLE table_1 (
[id1] [int] NULL ,
[id2] [int] NULL ,
[id3] [int] NULL ,
[id4] [int] NULL ,
[id5] [int] NULL ,
[id6] [int] NULL ,
[id7] [int] NULL ,
[id8] [int] NULL ,
[id9] [int] NULL ,
[id10] [int] NULL ,
[id11] [int] NULL ,
[id12] [int] NULL ,
[id13] [int] NULL ,
[id14] [int] NULL ,
[id15] [int] NULL ,
[id16] [int] NULL ,
[id17] [int] NULL ,
[id18] [int] NULL ,
[id19] [int] NULL ,
[id20] [int] NULL
) ON [PRIMARY]

CREATE TABLE table_2 (
[id1] [int] NULL ,
[id2] [int] NULL ,
[id3] [int] NULL ,
[id4] [int] NULL ,
[id5] [int] NULL ,
[id6] [int] NULL ,
[id7] [int] NULL ,
[id8] [int] NULL ,
[id9] [int] NULL ,
[id10] [int] NULL ,
[id11] [int] NULL ,
[id12] [int] NULL ,
[id13] [int] NULL ,
[id14] [int] NULL ,
[id15] [int] NULL ,
[id16] [int] NULL ,
[id17] [int] NULL ,
[id18] [int] NULL ,
[id19] [int] NULL ,
[id20] [int] NULL
) ON [PRIMARY]
CREATE TABLE table_3 (
[id1] [int] NULL ,
[id2] [int] NULL ,
[id3] [int] NULL ,
[id4] [int] NULL ,
[id5] [int] NULL ,
[id6] [int] NULL ,
[id7] [int] NULL ,
[id8] [int] NULL ,
[id9] [int] NULL ,
[id10] [int] NULL ,
[id11] [int] NULL ,
[id12] [int] NULL ,
[id13] [int] NULL ,
[id14] [int] NULL ,
[id15] [int] NULL ,
[id16] [int] NULL ,
[id17] [int] NULL ,
[id18] [int] NULL ,
[id19] [int] NULL ,
[id20] [int] NULL
) ON [PRIMARY]
CREATE TABLE table_4 (
[id1] [int] NULL ,
[id2] [int] NULL ,
[id3] [int] NULL ,
[id4] [int] NULL ,
[id5] [int] NULL ,
[id6] [int] NULL ,
[id7] [int] NULL ,
[id8] [int] NULL ,
[id9] [int] NULL ,
[id10] [int] NULL ,
[id11] [int] NULL ,
[id12] [int] NULL ,
[id13] [int] NULL ,
[id14] [int] NULL ,
[id15] [int] NULL ,
[id16] [int] NULL ,
[id17] [int] NULL ,
[id18] [int] NULL ,
[id19] [int] NULL ,
[id20] [int] NULL
) ON [PRIMARY]
CREATE TABLE table_5 (
[id1] [int] NULL ,
[id2] [int] NULL ,
[id3] [int] NULL ,
[id4] [int] NULL ,
[id5] [int] NULL ,
[id6] [int] NULL ,
[id7] [int] NULL ,
[id8] [int] NULL ,
[id9] [int] NULL ,
[id10] [int] NULL ,
[id11] [int] NULL ,
[id12] [int] NULL ,
[id13] [int] NULL ,
[id14] [int] NULL ,
[id15] [int] NULL ,
[id16] [int] NULL ,
[id17] [int] NULL ,
[id18] [int] NULL ,
[id19] [int] NULL ,
[id20] [int] NULL
) ON [PRIMARY]
CREATE TABLE table_6 (
[id1] [int] NULL ,
[id2] [int] NULL ,
[id3] [int] NULL ,
[id4] [int] NULL ,
[id5] [int] NULL ,
[id6] [int] NULL ,
[id7] [int] NULL ,
[id8] [int] NULL ,
[id9] [int] NULL ,
[id10] [int] NULL ,
[id11] [int] NULL ,
[id12] [int] NULL ,
[id13] [int] NULL ,
[id14] [int] NULL ,
[id15] [int] NULL ,
[id16] [int] NULL ,
[id17] [int] NULL ,
[id18] [int] NULL ,
[id19] [int] NULL ,
[id20] [int] NULL
) ON [PRIMARY]
CREATE TABLE table_7 (
[id1] [int] NULL ,
[id2] [int] NULL ,
[id3] [int] NULL ,
[id4] [int] NULL ,
[id5] [int] NULL ,
[id6] [int] NULL ,
[id7] [int] NULL ,
[id8] [int] NULL ,
[id9] [int] NULL ,
[id10] [int] NULL ,
[id11] [int] NULL ,
[id12] [int] NULL ,
[id13] [int] NULL ,
[id14] [int] NULL ,
[id15] [int] NULL ,
[id16] [int] NULL ,
[id17] [int] NULL ,
[id18] [int] NULL ,
[id19] [int] NULL ,
[id20] [int] NULL
) ON [PRIMARY]
CREATE TABLE table_8 (
[id1] [int] NULL ,
[id2] [int] NULL ,
[id3] [int] NULL ,
[id4] [int] NULL ,
[id5] [int] NULL ,
[id6] [int] NULL ,
[id7] [int] NULL ,
[id8] [int] NULL ,
[id9] [int] NULL ,
[id10] [int] NULL ,
[id11] [int] NULL ,
[id12] [int] NULL ,
[id13] [int] NULL ,
[id14] [int] NULL ,
[id15] [int] NULL ,
[id16] [int] NULL ,
[id17] [int] NULL ,
[id18] [int] NULL ,
[id19] [int] NULL ,
[id20] [int] NULL
) ON [PRIMARY]
CREATE TABLE table_9 (
[id1] [int] NULL ,
[id2] [int] NULL ,
[id3] [int] NULL ,
[id4] [int] NULL ,
[id5] [int] NULL ,
[id6] [int] NULL ,
[id7] [int] NULL ,
[id8] [int] NULL ,
[id9] [int] NULL ,
[id10] [int] NULL ,
[id11] [int] NULL ,
[id12] [int] NULL ,
[id13] [int] NULL ,
[id14] [int] NULL ,
[id15] [int] NULL ,
[id16] [int] NULL ,
[id17] [int] NULL ,
[id18] [int] NULL ,
[id19] [int] NULL ,
[id20] [int] NULL
) ON [PRIMARY]
CREATE TABLE table_0 (
[id1] [int] NULL ,
[id2] [int] NULL ,
[id3] [int] NULL ,
[id4] [int] NULL ,
[id5] [int] NULL ,
[id6] [int] NULL ,
[id7] [int] NULL ,
[id8] [int] NULL ,
[id9] [int] NULL ,
[id10] [int] NULL ,
[id11] [int] NULL ,
[id12] [int] NULL ,
[id13] [int] NULL ,
[id14] [int] NULL ,
[id15] [int] NULL ,
[id16] [int] NULL ,
[id17] [int] NULL ,
[id18] [int] NULL ,
[id19] [int] NULL ,
[id20] [int] NULL
) ON [PRIMARY]
DECLARE @countPerClient int
SET @countPerClient = 10000
DECLARE @counter int
SET @counter = 1

/* Fill the big table with the 10 clients */

WHILE (@counter <= @countPerClient )
BEGIN
INSERT bigTable (clientId) VALUES (0)
SET @counter = @counter + 1
END
SET @counter=1
WHILE (@counter <= @countPerClient )
BEGIN
INSERT bigTable (clientId) VALUES (1)
SET @counter = @counter + 1
END
SET @counter=1
WHILE (@counter <= @countPerClient )
BEGIN
INSERT bigTable (clientId) VALUES (2)
SET @counter = @counter + 1
END
SET @counter=1
WHILE (@counter <= @countPerClient )
BEGIN
INSERT bigTable (clientId) VALUES (3)
SET @counter = @counter + 1
END
SET @counter=1
WHILE (@counter <= @countPerClient )
BEGIN
INSERT bigTable (clientId) VALUES (4)
SET @counter = @counter + 1
END
SET @counter=1
WHILE (@counter <= @countPerClient )
BEGIN
INSERT bigTable (clientId) VALUES (5)
SET @counter = @counter + 1
END
SET @counter=1
WHILE (@counter <= @countPerClient )
BEGIN
INSERT bigTable (clientId) VALUES (6)
SET @counter = @counter + 1
END
SET @counter=1
WHILE (@counter <= @countPerClient )
BEGIN
INSERT bigTable (clientId) VALUES (7)
SET @counter = @counter + 1
END
SET @counter=1
WHILE (@counter <= @countPerClient )
BEGIN
INSERT bigTable (clientId) VALUES (8)
SET @counter = @counter + 1
END
SET @counter=1
WHILE (@counter <= @countPerClient )
BEGIN
INSERT bigTable (clientId) VALUES (9)
SET @counter = @counter + 1
END
/* Fill each of the small tables with one client's records */
SET @counter = 1
WHILE (@counter <= @countPerClient )
BEGIN
INSERT table_1 DEFAULT VALUES
SET @counter = @counter + 1
END

SET @counter = 1
WHILE (@counter <= @countPerClient )
BEGIN
INSERT table_2 DEFAULT VALUES
SET @counter = @counter + 1
END
SET @counter = 1
WHILE (@counter <= @countPerClient )
BEGIN
INSERT table_3 DEFAULT VALUES
SET @counter = @counter + 1
END
SET @counter = 1
WHILE (@counter <= @countPerClient )
BEGIN
INSERT table_4 DEFAULT VALUES
SET @counter = @counter + 1
END
SET @counter = 1
WHILE (@counter <= @countPerClient )
BEGIN
INSERT table_5 DEFAULT VALUES
SET @counter = @counter + 1
END
SET @counter = 1
WHILE (@counter <= @countPerClient )
BEGIN
INSERT table_6 DEFAULT VALUES
SET @counter = @counter + 1
END
SET @counter = 1
WHILE (@counter <= @countPerClient )
BEGIN
INSERT table_7 DEFAULT VALUES
SET @counter = @counter + 1
END
SET @counter = 1
WHILE (@counter <= @countPerClient )
BEGIN
INSERT table_8 DEFAULT VALUES
SET @counter = @counter + 1
END

SET @counter = 1
WHILE (@counter <= @countPerClient )
BEGIN
INSERT table_9 DEFAULT VALUES
SET @counter = @counter + 1
END
SET @counter = 1
WHILE (@counter <= @countPerClient )
BEGIN
INSERT table_0 DEFAULT VALUES
SET @counter = @counter + 1
END
GO
/* Now time for the queries */
DECLARE @x datetime
SELECT @x = GetDate()
select count(*) from table_5
SELECT 'Split Tables' as label,DateDiff( millisecond, @x, GetDate())

SELECT @x = GetDate()
select count(*) from bigTable where clientId=5
SELECT 'Big Table' as label,DateDiff( millisecond, @x, GetDate())
Jul 20 '05 #4
[posted and mailed, please reply in news]

jeff brubaker (je**@priva.com) writes:
Okay, sorry for the delay. Here is a SQL script to set up the
experiment. Basically it builds one table with 100,000 records, and 10
small tables with 10,000 records each. Selecting all the records from
table_5 is substantially faster than selecting all the records from
bigTable where clientId = 5.


Yes, since there is no index at all on your tables, this is not
strange.

First, I had to increase the number of rows per client to 100000 to
get a significant difference.

When I had run the first test, I ran these two statements:

CREATE CLUSTERED INDEX clientid_ix on bigTable (clientId)
go
DBCC DROPCLEANBUFFERS

The first statement builds an index on bigTable.clientId. The second
just cleans out the cache, so that all data will be read from disk.
(Don't do this on a production machine!)
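
(If there has been a lot of write activity just before, it can also help
to issue a CHECKPOINT first, since DROPCLEANBUFFERS only throws out clean
pages. A sketch:)

CHECKPOINT
GO
DBCC DROPCLEANBUFFERS
GO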

I then ran the benchmarks. table_5 was still faster, at 450 ms, whereas
the SELECT COUNT(*) from bigTable needed 563 ms. However, on
successive runs, table_5 took 110 ms, whereas the read from bigTable
was 16 ms.
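
If you want to verify that the query really seeks on the new clustered
index rather than scanning the whole table, Query Analyzer can print the
plan as text. A sketch, using the names from your script (the plan should
show a clustered index seek on clientid_ix):

SET SHOWPLAN_TEXT ON
GO
SELECT COUNT(*) FROM bigTable WHERE clientId = 5
GO
SET SHOWPLAN_TEXT OFF
GO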

Before you consider advanced techniques like splitting tables, you should
make sure that you have a sound index strategy. A clustered index has
the table data as its leaf pages, so after adding the clustered index,
bigTable is really like table_0 to table_9 glued together.
--
Erland Sommarskog, SQL Server MVP, so****@algonet.se

Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techinf...2000/books.asp
Jul 20 '05 #5
