Bytes | Software Development & Data Engineering Community

DataPropagator for z/OS New Version?

Hello everyone,

We operate one of the largest mainframes in our area, and we are using an
old version of DataPropagator for z/OS. Our machines have reached the
maximum DASD and cannot keep up with storing the DB2 logs. Is there a new
version of DataPropagator for z/OS coming out?

We looked at Q Replication, but it is slower than DataPropagator because it
uses message queues to replicate data. It also brought down our messaging
service: when we used it for replication, it flooded our MQ infrastructure.

Thank you for any help.

F Jose Serrano
Nov 12 '05 #1
The latest version of DProp for z/OS is

http://www-306.ibm.com/software/data...edition_z.html

I'm not sure that it's going to do anything to solve your problem, though.
You need to have enough space to store the logs until DProp gets to them
... and you are going to need to store the logs anyway for recovery
purposes. And how Q-Rep could have been slower I'm not sure ...
anecdotal reports are that its latency is 3X lower than SQL Replication's.

Perhaps there is a tuning knob that can be modified ... maybe someone
else more familiar with DProp can help. Perhaps you'd like to try the
IDUG site (www.idug.org) or open a PMR.

Larry Edelstein


Nov 12 '05 #2
RdR
The problem with DPROP for z/OS is that it stages the log changes it reads
through IFI (an IFCID 306 read) in DB2 tables. Once you insert the
changed-data information into DB2 tables, that goes back into the log;
then, once you prune and clean up those tables, that puts still more data
on the DB2 logs. So if you have 1 million inserts, you will have 1 million
log RBAs to scrape, plus another 1 million RBA entries in the logs once the
changed data is staged in DB2 tables, and another million records when it
is pruned. To propagate 1 million changes, there will be 2 million
additional log items because of the staging and pruning. For updates it
doubles again, because the log carries the before and after images (if
needed). Logging must be turned on as a prerequisite of DPROP, because if
the tables are not logged, the changes will not be detected.
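The log amplification RdR describes works out to simple arithmetic. Here is a rough back-of-the-envelope model of it (illustrative only; the function name and the assumption that every staging insert and prune generates one log record apiece are mine, not DPROP internals):

```python
def dprop_log_amplification(changes, updates=False):
    """Rough model of the extra DB2 log records generated when changed
    data is staged in DB2 tables and later pruned.
    Illustrative arithmetic only - not based on DPROP internals."""
    source_log = changes       # original change records to scrape
    staging_log = changes      # inserts into staging tables are logged too
    pruning_log = changes      # pruning the staged rows is also logged
    if updates:
        # updates can carry before and after images, doubling the volume
        staging_log *= 2
        pruning_log *= 2
    extra = staging_log + pruning_log
    return source_log, extra

source, extra = dprop_log_amplification(1_000_000)
print(f"{source:,} source log records, {extra:,} extra from staging/pruning")
# 1,000,000 source log records, 2,000,000 extra from staging/pruning
```

So replicating 1 million inserts roughly triples the log traffic, which matches the DASD pressure described in the original question.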

The problem with Q Replication - even though it is by design faster than
DPROP (because it does not stage in DB2 tables, so you avoid the
"log-back effect" described above) - is that you need to upgrade your MQ
infrastructure. If you originally used MQ just for messaging, you need to
make sure your MQ infrastructure can accommodate the replication-related
messages. This may be the bottleneck, and why MQ ends up slower than
DPROP. I really do not believe in using MQ as a means to replicate data
from point to point, especially data that needs to be sent in real time,
because of how MQ is designed: too many exchange points.

Federation may be a better option, since the data does not need to leave
the source database, so there is no extra log overhead.

RdR


Nov 12 '05 #3
Interesting, since Q Replication was designed to replicate data
point-to-point in near real time for HA situations, and it does provide
much lower latency. I don't have the numbers, but I know IBM has
customers in the financial industry, with geographically separated sites,
getting thousands of transactions per second with good latency.

Larry Edelstein

Nov 12 '05 #4
Hi Larry / RdR,

Thanks for your comments. RdR is exactly right; that is what is happening
to us with DataPropagator. Q Replication needs MQ, and MQ can be a
bottleneck when a large volume of data is pushed through it; it does not
scale for us. After 10 million records or so a day it is kaput, and
recovery when one of your queues fails is hard and very time-consuming;
sometimes we just reload the data. We are sending 100 to 200 million
changes a day on a banking system in our area. MQ is not really point to
point: it is log scrape to a message queue, then to another message queue,
then to DB2, so there are four points of failure and possible bottlenecks.
We would need to scale up our MQ infrastructure, but that would be very
expensive. I agree with RdR: MQ should not be used for high-volume
replication. Maybe for a small volume, like ten million transactions a
day, it may scale, but it seems it would still have some problems moving
that volume.

Federation seems to be the only option. What we are really looking for is
a new version of DataPropagator that does not stage in DB2 tables and does
not use message queues. If we can keep the staged changed data from coming
back into the logs, that will ease a lot of the resource and latency
issues. If there is a product that reads the logs but does not stage in
DB2, we will save a lot of CPU and DASD resources, not to mention the
headache our DBAs go through maintaining DB2 (the staging tables) for
replication purposes. I heard Striva by Informatica does this, but we
really have to go to IBM first; only if IBM cannot do it will we go
outside.

Thanks so much for the answers.

F Jose Serrano

Nov 12 '05 #5
Hi,

Here is a link to a replication performance white paper detailing the
results of studies done at IBM's Silicon Valley Lab comparing the
throughput, latency, and MIPS usage of SQL Replication (DataPropagator)
and Q Replication:
http://www-128.ibm.com/developerwork...m-0503aschoff/

Those measurements, as well as results we are seeing from our existing
customers in production with Q Replication, clearly show that:
o Q Replication is faster than SQL Replication and consumes less CPU
o The Q Replication solution - including MQ as a transport - can scale
nicely to handle very large workloads

In your workload you say you may be sending 200 million changes a day.
That breaks down to approximately 2,300 changes/rows a second. We have
measured Q Replication handling approximately 13,000 rows a second, which
is over 1 billion changes in 24 hours, with latency around 1 second. One
of our production customers, replicating brokerage data from Texas to the
Boston area, happily shared with us that they replicated almost 1 million
transactions with an average latency of 1.3 seconds during stock market
hours. I don't know how many changes are in their transactions, so I
can't do a row comparison, but note also that this was during their busy
production time, so replication was also constrained by how much system
resource it could take.
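The arithmetic behind those figures is easy to verify (a quick sanity check using only the numbers quoted above):

```python
SECONDS_PER_DAY = 24 * 60 * 60  # 86,400

# 200 million changes/day works out to roughly 2,300 rows per second
daily_changes = 200_000_000
print(round(daily_changes / SECONDS_PER_DAY))  # 2315

# 13,000 rows/second sustained is over 1 billion changes in 24 hours
measured_rate = 13_000
print(measured_rate * SECONDS_PER_DAY)  # 1123200000
```

On these numbers the measured rate leaves roughly 5x headroom over the stated daily workload, assuming the load were spread evenly across the day.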
From what you describe in your posting, I don't see a performance issue
with Q Replication. You yourself said it flooded your MQ infrastructure
(I can't tell whether that means your network or something in MQ itself),
which means it performed as designed - FAST. I would also venture to say
that your issue is not MQ, since a number of customers, and we at IBM,
have configured MQ to achieve very good results.

We have seen thus far, in our tests and at some customer shops, that a
slow network and slow DASD are the biggest contributors to poor
replication performance. Maybe one of those is really the performance
issue you are having.

Jaime F. Anaya

Nov 12 '05 #6
Hi Jaime,

Our issue with DataPropagator is that it stages in DB2 tables; that is the
root cause of all the CPU usage issues, latency issues, big log issues,
high DASD usage, and the difficulty recovering from an unplanned outage
(sometimes even from planned outages). The biggest headache for our DBAs
is that they worry about DB2 not because of the database itself, but
because they have to make sure the staging tables will be big enough to
handle those big volumes. If the staging were not in DB2 tables, there
would be no problem with the CPU usage and DASD we need to maintain. In
the past, the IBM solution was to upgrade our mainframe, buy more DASD,
upgrade CPU power, or go to a data sharing group, but we came to a point
where there was no more upgrading, only buying a new machine.
DataPropagator (almost given away for free) was really a resource hog that
was only compensated for by faster machines and more DASD for the logs.
The latency is good for 1 million changes, but performance degrades from
there.

Q Replication could have alleviated that, but the problem is that the way
it sends data is through message queues. I think the studies you presented
assume that the message queues are properly configured. In our case, we
originally used the message queues just for simple messaging, so they are
not properly sized. Once we used that MQ infrastructure for the
Q Replication product, it flooded our MQ infrastructure, and the logical
solution is to upgrade things like bandwidth, machines, network, etc.,
which is very expensive. Also, we do not want to pass our DB2 DBAs'
headache on to our MQ infrastructure workers, making them handle all the
recovery procedures, which are very poor in MQ / Q Replication. There is a
big chance we would need to refresh the data, and refreshing our total
number of records, which is in the billions of transactions, would take
days.

Before we look at other products, I wanted to make sure there is no
available version of DataPropagator, or even RepliData, that does not
stage in DB2 tables. If I find there is no DataPropagator upgrade path,
then we will look at other products that do real-time, point-to-point
replication.

The studies and papers you pointed to require, as a prerequisite, a
properly configured MQ environment, which we will not invest in. So it is
either a new DataPropagator that does not stage in DB2 tables, or time to
look outside IBM.

But I still appreciate your effort in pointing me to those documents.
Thank you.

F Jose Serrano

Nov 12 '05 #7
Hi F Jose,

What version of DataPropagator are you running? In V8 we did make some
performance improvements, but the architecture stays the same, so you
would still have the same issues you pointed out. RepliData uses MQ as
well and would also require an MQ and network infrastructure that could
handle the load.

You may be able to find a product that uses raw TCP/IP (I assume that is
what you mean by point-to-point). DataPropagator and Q Replication do not
have that ability as of yet - maybe in the future, but that is hard to say
right now. However, for that product to achieve high throughput and low
latency, it will also require an improved network. It could flood your
network worse than Q Replication, because it is not slowed by the MQ
overhead of having to persist messages in logs.

I guess I'm finding it difficult to see how you can achieve the latency
and throughput numbers you want without investing in the underlying
network. You already have MQ, so the only additional MQ costs I can think
of are a license upgrade(?) and possibly better DASD for MQ(?). If you
don't mind sharing where the cost issues are, I may be able to use that to
influence cost structures that are more reasonable. Thanks and good luck.

Nov 12 '05 #8
CV
Hello Frodio,

We used DataPropagator to send data from one z/OS system to another, and
we also tried Q Rep. We have the same issue as you with DataPropagator: it
is neither the disk nor the network, it is the logs coming back from the
staging tables that cause volumes and volumes of active and archived log.
We tried Q Replication - faster, better, with no side effect of logs
coming back, because the staging is in a message queue - but recovery from
an outage was very poor. Q Replication is three to four times faster, but
if for some reason the staging message queue gets wiped out after the
changed data has already been read and staged, it was so hard to recover
that we needed to do a full refresh. We have bigger volumes than you,
about a billion records a day with 100,000 records a second at peak times.
Q Replication may be able to handle this, but watch out if you need to
recover from an outage; it will be very difficult to start from where you
left off.

We ended up buying a product called Transformation Server for z/OS from a
company called DataMirror. It gets changed-data information from the DB2
logs through the DB2 Instrumentation Facility Interface, similar to what
DataPropagator does, but it stages in data spaces (memory), not in tables.
Since it is memory, I/O is very fast, faster than DB2 tables or message
queues. Since it is not in DB2 tables, there are no DB2 staging tables to
maintain and no worry about data going back into the logs. Once a unit of
work is complete in memory, it leaves the source via TCP/IP and goes
directly to the target side. If Q Replication can scale, this
Transformation Server can scale more. It scrapes the active and archived
logs for changed data and has been in production for five years now. The
good thing about Transformation Server for z/OS is that it supports most
of the popular databases as sources and targets and reads the logs of the
non-DB2 databases, plus all flavors of DB2 including the iSeries,
something Q Replication does not support (very surprising!).

If you are looking for a log scraping solution that scrapes the logs, does
not stage in DB2 tables or DASD-based data sets, and then goes straight to
TCP/IP, this is what you need to look at. There is nothing more direct.
Recovery also starts from where you left off, for planned or unplanned
outages, and since staging is in memory there is no DB2 maintenance
related to staging and pruning tables.
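The flow CV describes - scrape log records into an in-memory staging buffer, then ship each complete unit of work straight to the target - can be sketched roughly like this (a toy model only; all names and structure are invented for illustration, not DataMirror's actual implementation):

```python
from collections import defaultdict

class InMemoryStager:
    """Toy sketch of memory-based change staging: buffer log records
    per unit of work, and ship a unit of work only once its commit is
    seen. Illustrative only - not any real product's design."""

    def __init__(self, send):
        self.pending = defaultdict(list)  # uow_id -> buffered change records
        self.send = send                  # e.g. a function writing to a TCP socket

    def on_log_record(self, uow_id, record):
        # Stage in memory instead of inserting into a DB2 staging table,
        # so the staging itself generates no additional log records.
        self.pending[uow_id].append(record)

    def on_commit(self, uow_id):
        # Unit of work complete: ship it directly to the target and drop
        # the buffer - no pruning pass, hence no pruning log either.
        self.send(self.pending.pop(uow_id, []))

shipped = []
stager = InMemoryStager(send=shipped.append)
stager.on_log_record("uow1", ("INSERT", "row1"))
stager.on_log_record("uow1", ("UPDATE", "row2"))
stager.on_commit("uow1")
print(shipped)  # [[('INSERT', 'row1'), ('UPDATE', 'row2')]]
```

The key contrast with table-based staging is visible in the two handlers: neither staging nor cleanup touches the database, so none of the "log-back" traffic discussed earlier in the thread is generated.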

By the way, I am not a salesperson for this product; I am just one of
their happy customers after five years of using it. You can go to IDUG
(www.idug.org) to get more info on them; that is where I first saw this
product.

Hope this helps.

CV

Nov 12 '05 #9

This thread has been closed and replies have been disabled. Please start a new discussion.

Similar topics

1
7269
by: angelag | last post by:
I am currently taking a college course in Visual Basic.Net and I am a beginner. I bought Visual Studio.Net 2003 to do my homework at home. I built my first project and e-mailed it to myself at school. When I tried to open it in the lab, I got a message saying I couldn't open it because it was created with a newer version. Evidently the lab is using Visual Studio.Net 2002. My professor doesn't just want the executable file, he wants...
16
2755
by: Manlio Perillo | last post by:
Hi. I'm a new user of Python but I have noted a little problem. Python is a very good language but it is evolving, in particular its library is evolving. This can be a problem when, ad example, a module change its interface or its implementation in a fundamental way (an example: wxPython). This, I think, can be resolved by allowing an user to explicitly say what version of a module it wants (sush as version numbers in Linux shared...
2
8808
by: Terence Shek | last post by:
Is there a way to set the application binding policy so that it always binds to the latest version of an assembly? I'm hoping there is a way to avoid updating the application's binding configuration every time there is an update to a shared assembly.
2
630
by: j.b.messina | last post by:
This has not yet been published by Microsoft. It will be published within the next few weeks, mainly because I asked them to. I felt this was information badly needed, and I think this is the best group to share this information. A co-worker and I were able to determine how to tell exactly what version of .NET Framework is installed. We had to do this because we had to manually deploy the MS05-004 Security Bulletin to thousands of...
4
2840
by: Earl T | last post by:
When I try to get the netscape version for version 7, I get the HttpBrowserCapabilities class returning the version as 5 and not 7. (see code and output below) CODE HttpBrowserCapabilities bc; string s; bc = Request.Browser; ....
3
3071
by: Shadow Lynx | last post by:
At the bottom of the default Error page that appears when Unhandled Exceptions occur, what exactly is the difference between the "Microsoft ..Net Framework Version" and the "ASP.NET Version"? I understand that the ASP.Net version is the version of ASP.Net that the current site is running under and it can be retreived with System.Environment.Version.ToString. What exactly is the Microsoft .NET Framework Version that is displayed? It is...
4
6371
by: Mike L | last post by:
Error occurs on "System.Deployment.Application.ApplicationDeployment.CurrentDeployment" ** Here is my code private void frmMain_Load(object sender, System.EventArgs e) { System.Deployment.Application.ApplicationDeployment ad = System.Deployment.Application.ApplicationDeployment.CurrentDeployment;
0
1924
by: ev951 | last post by:
I am not that familiar with XML or XSL and I am trying to sort application version number strings in an XML file that my team uses for application installations on our Linux servers. I have tried the xsl:data-type=number, but that doesn't work. The output of the XML file is exactly what the XML file has and I would like to sort it by the number value. Below is the some of the output from the XML file. <?xml version="1.0"?>...
8
3503
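The sorting problem above arises because data-type="number" expects a single number, while a dotted version string like "1.2.10" is not one. The usual fix is to sort by each numeric component in turn; sketched here in Python rather than XSLT, with an invented list of versions:

```python
# Sketch in Python (not XSLT): sort dotted version strings numerically
# rather than lexically, so "1.2.10" sorts after "1.2.3".

versions = ["1.10.0", "1.2.3", "10.0.1", "1.2.10", "9.9"]

def version_key(s):
    """Convert 'a.b.c' into a tuple of ints, giving a numeric sort order."""
    return tuple(int(p) for p in s.split("."))

print(sorted(versions, key=version_key))
# ['1.2.3', '1.2.10', '1.10.0', '9.9', '10.0.1']
```

In XSLT the same idea is expressed with several xsl:sort elements, one per dot-separated component, each with data-type="number".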
by: schaf | last post by:
Hi NG! My application (version 1, a1) communicates with a service (version 1, s1). Now I would like to update the service and create a service version 2 (s2). The new function calls in s2 are implemented in a new interface, which derives from the old one to ensure that an old version of my application (a1) still works with s2. If I run my new version of the application (a2) against s1, I get an InvalidCastException (Return argument has an...
0
9645
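The versioning pattern described above can be sketched in miniature: the v2 interface extends v1, so an old client keeps working against a new service, while a new client against an old service fails at the cast. This is a hypothetical Python analogue (class and method names invented), not the poster's actual .NET interfaces:

```python
# Hypothetical sketch of the interface-versioning pattern described above:
# v2 derives from v1, so a v1 client can still use a v2 service,
# but a v2 client cannot treat a v1-only service as v2.

class ServiceV1:
    def get_data(self):
        return "data"

class ServiceV2(ServiceV1):  # v2 extends v1, keeping old calls valid
    def get_data_ex(self):
        return "extended data"

def v1_client(service):
    # Old client only uses the v1 surface, so any ServiceV1 works.
    return service.get_data()

def v2_client(service):
    if not isinstance(service, ServiceV2):
        # Analogous to the InvalidCastException the poster describes.
        raise TypeError("service does not implement the v2 interface")
    return service.get_data_ex()

print(v1_client(ServiceV2()))  # old client, new service: works
print(v2_client(ServiceV2()))  # new client, new service: works
```

The asymmetry is the point: deriving the new interface from the old one only protects old clients, not new clients talking to old services.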