Bytes | Software Development & Data Engineering Community

State of Beta 2


Anyone out there using beta 2 in production situations? Comments on
stability? I am rolling out a project in the next 4 weeks, and really
don't want to go through an upgrade soon after it's released on an
unsuspecting client, so I would LIKE to start working with 7.4.

--------------------

Andrew Rawnsley
President
The Ravensfield Digital Resource Group, Ltd.
(740) 587-0114
www.ravensfield.com
---------------------------(end of broadcast)---------------------------
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to ma*******@postgresql.org)

Nov 11 '05
Centuries ago, Nostradamus foresaw when ro***********@cox.net (Ron Johnson) would write:
On Thu, 2003-09-18 at 16:29, Andrew Sullivan wrote:
On Sat, Sep 13, 2003 at 10:52:45AM -0500, Ron Johnson wrote:
> So instead of 1TB of 15K fiber channel disks (and the requisite
> controllers, shelves, RAID overhead, etc), we'd need *two* TB of
> 15K fiber channel disks (and the requisite controllers, shelves,
> RAID overhead, etc) just for the 1 time per year when we'd upgrade
> PostgreSQL?


Nope. You also need it for the time when your vendor sells
controllers or chips or whatever with known flaws, and you end up
having hardware that falls over 8 or 9 times in a row.


????


This of course never happens in real life; expensive hardware is
_always_ UTTERLY reliable.

And the hardware vendors all have the same high standards as, well,
certain database vendors we might think of.

After all, Oracle and MySQL AB would surely never mislead their
customers about the merits of their database products any more than
HP, Sun, or IBM would about the possibility of their hardware having
tiny flaws.

And I would never mislead anyone, either. I'm sure I got a full 8
hours sleep last night. I'm sure of it...
--
"cbbrowne","@","cbbrowne.com"
http://www3.sympatico.ca/cbbrowne/finances.html
"XML combines all the inefficiency of text-based formats with most of the
unreadability of binary formats :-) " -- Oren Tirosh
Nov 11 '05 #161
ro***********@cox.net (Ron Johnson) wrote:
On Thu, 2003-09-18 at 16:29, Andrew Sullivan wrote:
On Sat, Sep 13, 2003 at 10:52:45AM -0500, Ron Johnson wrote:
> So instead of 1TB of 15K fiber channel disks (and the requisite
> controllers, shelves, RAID overhead, etc), we'd need *two* TB of
> 15K fiber channel disks (and the requisite controllers, shelves,
> RAID overhead, etc) just for the 1 time per year when we'd upgrade
> PostgreSQL?


Nope. You also need it for the time when your vendor sells
controllers or chips or whatever with known flaws, and you end up
having hardware that falls over 8 or 9 times in a row.


????


This of course never happens in real life; expensive hardware is
_always_ UTTERLY reliable.

And the hardware vendors all have the same high standards as, well,
certain database vendors we might think of.

After all, Oracle and MySQL AB would surely never mislead their
customers about the merits of their database products any more than
HP, Sun, or IBM would about the possibility of their hardware having
tiny flaws.

And I would /never/ claim to have lost sleep as a result of flaky
hardware. Particularly not when it's an HA fibrechannel array. I'm
/sure/ that has never happened to anyone. [The irony here should be
causing people to say "ow!"]
--
"cbbrowne","@","cbbrowne.com"
http://www3.sympatico.ca/cbbrowne/finances.html
"XML combines all the inefficiency of text-based formats with most of the
unreadability of binary formats :-) " -- Oren Tirosh
Nov 11 '05 #163


On Thu, 18 Sep 2003, Lamar Owen wrote:
Marc G. Fournier wrote:
'K, I had already answered it as part of this thread when I suggested
doing exactly that ... in response to which several ppl questioned the
feasibility of setting up a duplicate system with >1TB of disk space to do
the replication over to ...


The quote mentioned is a question, not an answer. You said:
'k, but is it out of the question to pick up a duplicate server, and use
something like eRServer to replicate the databases between the two
systems, with the new system having the upgraded database version running
on it, and then cutting over once its all in sync?


'Something like eRserver' doesn't give me enough detail; so I asked if
eRserver could do this, mentioning specific version numbers. A straight
answer -- yes it can, or no it can't -- would be nice. So you're saying
that eRserver can do this, right? Now if there just wasn't that java
dependency.... Although the contrib rserv might suffice for data
migration capabilities.


Sorry, but I hadn't actually seen your question about it ... but, yes,
erserver can do this ... as far as I know, going from, say, v7.2 -> v7.4
shouldn't be an issue either, but I only know of a few doing v7.2->v7.3
migrations with it so far ...

Nov 11 '05 #165
That sounds good save two things. We need to state what the project run
dates are and what happens at or around the due date. That is to say: we have
the deliverable for testing (beta ready); more time is needed to complete core
features (alpha ready) and therefore more funds are needed; the project is on
hold due to features needed outside the scope of the project; etc, etc, etc...

You get the idea.

Quoting "Joshua D. Drake" <jd@commandprompt.com>:
Hello,

O.k. here are my thoughts on how this could work:

Command Prompt will set up an escrow account online at www.escrow.com.
When the escrow account totals $2000.00 and is released, Command Prompt
will dedicate a programmer for one month to debugging, documenting,
reviewing, digging, crying, screaming, begging and bleeding with the code.
At the end of the month, and probably during, depending on how everything
goes, Command Prompt will release its findings. The findings will include
a project plan for moving forward over the next 5 months (if that is what
it takes) to produce the first functional pg_upgrade.

If the project is deemed to be moving in the right direction by the
community members, and specifically the core members, we will set up
milestone payments for the project.

What does everyone think?

Sincerely,

Joshua D. Drake
Dennis Gearon wrote:
I had already committed $50/mo.

Robert Creager wrote:
Once upon a time (Tue, 16 Sep 2003 21:26:05 -0700)
Dennis Gearon <ge*****@fireserve.net> uttered something amazingly
similar to:

Robert Creager wrote:

> Once upon a time (Tue, 16 Sep 2003 12:59:37 -0700)
> "Joshua D. Drake" <jd@commandprompt.com> uttered something amazingly
> similar to:
>
>> If someone is willing to pony up 2000.00 per month for a period of
>> at
>
> Well, if you're willing to set up some sort of escrow, I'll put in
> $100. I

Is that $100 once, or $100 x 6 months of anticipated development time?

That's $100 once. And last I looked, there are well over 1800 subscribers
on this list alone. On the astronomically small chance every one of them
did what I'm doing, it would cover more than 6 months of development time
;-) This strikes me as like supporting public radio. The individuals do
some, and the corporations do a bunch.

I'm just putting my money toward a great product, rather than complaining
that it's not done. Just like Joshua is doing. You cannot hire a competent
programmer for $24k a year, so he is putting up some money on this also.

There have been a couple of other bites from small businesses, so who
knows!

You game?

Cheers,
Rob


--
Command Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC
PostgreSQL support, programming, shared hosting and dedicated hosting.
+1-503-222-2783 - jd@commandprompt.com - http://www.commandprompt.com
The most reliable support for the most reliable Open Source database.


--
Keith C. Perry
Director of Networks & Applications
VCSN, Inc.
http://vcsn.com

______________________________
This email account is being hosted by:
VCSN, Inc : http://vcsn.com


Nov 11 '05 #166
On Thu, 18 Sep 2003 12:11:18 -0400, Lamar Owen <lo***@pari.edu> wrote:
Marc G. Fournier wrote:
[...] upgrading is a key feature [...] a migration tool
that could read the old format _without_a_running_old_backend_ [...]
the new backend is powerless to recover the old data.
OS upgrades [...], FreeBSD ports upgrades, and RPM
upgrades are absolutely horrid at this point. [...]
[censored] has a better system than we
[...] the pain of upgrading [...]
*I* should complain about a ramble? :-)


Lamar, I *STRONGLY* agree with almost everything you say here and in
other posts, except perhaps ...

You et al. seem to think that system catalog changes wouldn't be a
problem if only we could avoid page format changes. This is not
necessarily so. Page format changes can be handled without much
effort, if

.. the changes are local to each page (the introduction of a level
indicator in btree pages is a counter-example),

.. we can tell page type and version for every page,

.. the new format does not need more space than the old one.
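To make those three conditions concrete, here is a minimal, hypothetical sketch of a page-at-a-time conversion loop. The one-byte version tag at offset 0 and the conversion callback are invented for illustration; this is not PostgreSQL's actual page header layout, just the shape of a converter that the conditions above would permit:

```python
import io

PAGE_SIZE = 8192  # PostgreSQL's default block size

def convert_pages(f, page_version_of, convert_page, new_version):
    """Walk a relation file page by page and upgrade old-format pages
    in place. This only works when the change is local to each page,
    every page carries a recognizable version tag, and the converted
    page still fits in PAGE_SIZE."""
    f.seek(0, io.SEEK_END)
    npages = f.tell() // PAGE_SIZE
    converted = 0
    for i in range(npages):
        f.seek(i * PAGE_SIZE)
        page = bytearray(f.read(PAGE_SIZE))
        if page_version_of(page) == new_version:
            continue  # already in the new format
        new_page = convert_page(page)
        # third condition: the new format must not need more space
        assert len(new_page) == PAGE_SIZE
        f.seek(i * PAGE_SIZE)
        f.write(new_page)
        converted += 1
    return converted

# Toy demonstration: two pages, one old-format (tag 1), one new (tag 2).
buf = io.BytesIO(bytes([1]) * PAGE_SIZE + bytes([2]) * PAGE_SIZE)
n = convert_pages(
    buf,
    page_version_of=lambda p: p[0],
    convert_page=lambda p: bytes([2]) + bytes(p[1:]),
    new_version=2,
)
print(n)  # 1: exactly one old-format page was rewritten
```

The btree level-indicator counter-example fails the first condition: it needs information that is not local to the page being converted.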

You wrote earlier:
| the developers who changed the on-disk format ...

Oh, that's me, I think. I am to blame for the heap tuple header
changes between 7.2 and 7.3; Tom did some cleanup work behind me but
cannot be held responsible for the on-disk-format incompatibilities.
I'm not aware of any other changes falling into this category for 7.3.
So you might as well have used the singular form ;-)

| ... felt it wasn't important to make it continue working.

This is simply not true. Seamless upgrade is *very* important, IMHO.
See http://archives.postgresql.org/pgsql...6/msg00136.php
for example, and please keep in mind that I was still very "fresh" at
that time. Nobody demanded that I keep my promise and I got the
impression that a page format conversion tool was not needed because
there wouldn't be a pg_upgrade anyway.

Later, in your "Upgrading rant" thread, I even posted some code
(http://archives.postgresql.org/pgsql.../msg00294.php).
Unfortunately this went absolutely unnoticed, probably because it looked
so long: I fat-fingered the mail and included the code twice. :-(
It's all
in the archives that nobody seems willing to read over again. Why do we
even have archives if they're not going to be used?


Sic!

While I'm at it, here are some comments not directly addressed to
Lamar:

Elsewhere in this current thread it has been suggested that the
on-disk format will stabilize at some time in the future and should
then be frozen to ensure painless upgrades. IMHO, at the moment when
data structures are declared stable and immutable the project is dead.

And I don't believe the myth that commercial database vendors have
reached a stable on-disk representation. Whoever said this, is kindly
asked to reveal his source of insight.

A working pg_upgrade is *not* the first thing we need. What we need
first is willingness to not break backwards compatibility. When
Postgres adopts a strategy of not letting in any change unless it is
fully compatible with the previous format or accompanied by an upgrade
script/program/whatever, that would be a huge step forward. First
breaking things for six months or more and then, when the release date
comes nearer, trying to build an upgrade tool is not the right
approach.

A - hopefully not too unrealistic - vision: _At_any_time_ during a
development cycle for release n+1 it is possible to take a cvs
snapshot, build it, take any release n database cluster, run a
conversion script over it (or not), and start the new postmaster with
-D myOldDataDir ...

Granted, this slows down development, primarily while developers are
not yet used to it. But once the infrastructure is in place, things
should get easier. While a developer is working on a new feature he
knows the old data structures as well as the new ones; this is the
best moment to design and implement an upgrade path, which is almost
hopeless if tried several months later by someone else.

And who says that keeping compatibility in mind while developing new
features cannot be fun? I assure you, it is!

Servus
Manfred


Nov 11 '05 #167
"Marc G. Fournier" <sc*****@postgresql.org> writes:
hmmm ... k, is it feasible to go a release or two at a time without on
disk changes? if so, pg_upgrade might not be as difficult to maintain,
since, unless someone can figure out a way of doing it, 'on disk change
releases' could still require dump/reloads, with a period of stability in
between?
Yeah, for the purposes of this discussion I'm just taking "pg_upgrade"
to mean something that does what Bruce's old script does, namely
transfer the schema into the new installation using "pg_dump -s" and
then push the user tables and indexes physically into place. We could
imagine that pg_upgrade would later get some warts added to it to handle
some transformations of the user data, but that might or might not ever
need to happen.
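As a rough sketch of that second step (not the actual pg_upgrade script; the directories and the relfilenode mapping here are stand-ins invented for illustration), "push the user tables physically into place" amounts to copying each old relation file onto whichever relfilenode the schema-only restore assigned in the new cluster:

```python
import shutil
import tempfile
from pathlib import Path

def push_tables_into_place(old_dir, new_dir, relfilenode_map):
    """After the schema has been recreated in the new cluster with
    "pg_dump -s", copy each user table/index file from the old cluster
    onto the file the new cluster created for that relation. Both
    postmasters must be stopped at this point; building the
    old-to-new relfilenode map is the part pg_upgrade would automate."""
    for old_node, new_node in relfilenode_map.items():
        src = Path(old_dir) / str(old_node)
        dst = Path(new_dir) / str(new_node)
        shutil.copyfile(src, dst)

# Toy demonstration with stand-in data directories:
old = Path(tempfile.mkdtemp())
new = Path(tempfile.mkdtemp())
(old / "16384").write_bytes(b"heap data")  # old cluster's file for table t
(new / "24576").write_bytes(b"")           # empty file from the schema load
push_tables_into_place(old, new, {16384: 24576})
print((new / "24576").read_bytes())  # b'heap data'
```

The warts Tom mentions would slot into this loop: instead of a plain copy, each file could be passed through a per-release transformation of the user data.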

I think we could definitely adopt a policy of "on-disk changes not
oftener than every X releases" if we had a working pg_upgrade, even
without doing any extra work to allow updates. People who didn't
want to wait for the next incompatible release could have their change
sooner if they were willing to do the work to provide an update path.
*Or* ... as we've seen more with this dev cycle than previous ones, how
much could be back-patched to the previous version(s) relatively
easily, without requiring on-disk changes?


It's very difficult to back-port anything beyond localized bug fixes.
We change the code too much --- for instance, almost no 7.4 patch will
apply exactly to 7.3 or before because of the elog-to-ereport changes.

But the real problem IMHO is we don't have the manpower to do adequate
testing of back-branch changes that would need to be substantially
different from what gets applied to HEAD. I think it's best to leave
that activity to commercial support outfits, rather than put community
resources into it.

(Some might say I have a conflict of interest here, since I work for Red
Hat which is one of said commercial support outfits. But I really do
think it's more reasonable to let those companies do this kind of
gruntwork than to expect the community hackers to do it.)

regards, tom lane


Nov 11 '05 #168
On Fri, 19 Sep 2003 17:38:13 -0400, Tom Lane <tg*@sss.pgh.pa.us> wrote:
A working pg_upgrade is *not* the first thing we need.
Yes it is.


At the risk of being called a stubborn hairsplitter, I continue to say
that pg_upgrade is not the *first* thing we need. Maybe the second
....
As you say later,
... But once the infrastructure is in place, things
should get easier.

Yes, at some point in time we need an infrastructure/upgrade
process/tool/pg_upgrade, whatever we call it. What I tried to say is
that *first* developers must change their point of view and give
backwards compatibility a higher priority. As long as I don't write
page conversion functions because you changed the system catalogs and
you see no need for pg_upgrade because I broke the page format,
seamless upgrade cannot become a reality.
Until we have a working pg_upgrade, every little catalog change will
break backwards compatibility. And I do not feel that the appropriate
way to handle catalog changes is to insist on one-off solutions for each
one.
I tend to believe that every code change or new feature that gets
implemented is unique by its nature, and if it involves catalog
changes it requires a unique upgrade script/tool. How should a
generic tool guess the contents of a new catalog relation?

Rod's adddepend is a good example. It is a one-off upgrade solution,
which is perfectly adequate because Rod's dependency patch was a
singular work, too. Somebody had to sit down and code some logic into
a script.
Any quick look at the CVS logs will show that minor and major
catalog revisions occur *far* more frequently than changes that would
affect on-disk representation of user data.


Some catalog changes can be done by scripts executed by a standalone
backend, others might require more invasive surgery. Do you have any
feeling which kind is the majority?

I've tried to produce a prototype for seamless upgrade with the patch
announced in
http://archives.postgresql.org/pgsql...8/msg00937.php. It
implements new backend functionality (index scan cost estimation using
index correlation) and needs a new system table (pg_indexstat) to
work. I wouldn't call it perfect (for example, I still don't know how
to insert the new table into template0), but at least it shows that
there is a class of problems that require catalog changes and *can* be
solved without initdb.

Servus
Manfred


Nov 11 '05 #169
On Fri, 19 Sep 2003 18:51:00 -0400, Tom Lane <tg*@sss.pgh.pa.us> wrote:
transfer the schema into the new installation using "pg_dump -s" and
then push the user tables and indexes physically into place.


I'm more in favour of in-place upgrade. This might seem risky, but I
think we can expect users to backup their PGDATA directory before they
start the upgrade.

I don't trust pg_dump because

.. it doesn't help when the old postmaster binaries are no longer
available

.. it does not always produce scripts that can be loaded without manual
intervention. Sometimes you create a dump and cannot restore it with
the same Postmaster version. RTA.
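A minimal sketch of the "backup PGDATA before the in-place upgrade" step. The directory and file below are stand-ins so the example is self-contained; in practice you would stop the postmaster and point this at your real cluster directory:

```python
import tarfile
import tempfile
from pathlib import Path

def backup_pgdata(pgdata, backup_path):
    """Cold backup of the entire cluster directory before attempting an
    in-place upgrade. The postmaster must be stopped first so that the
    archived files are consistent."""
    with tarfile.open(backup_path, "w:gz") as tar:
        tar.add(pgdata, arcname=Path(pgdata).name)

# Stand-in PGDATA so the sketch runs anywhere:
pgdata = Path(tempfile.mkdtemp()) / "data"
pgdata.mkdir()
(pgdata / "PG_VERSION").write_text("7.3\n")

archive = pgdata.parent / "pgdata-backup.tar.gz"
backup_pgdata(pgdata, archive)

# Verify the archive actually lists the cluster files before upgrading.
with tarfile.open(archive) as tar:
    print(sorted(tar.getnames()))  # ['data', 'data/PG_VERSION']
```

If the in-place conversion then goes wrong, recovery is just removing the damaged directory and unpacking the tarball, with no dependency on the old postmaster binaries or on a restorable dump.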

Servus
Manfred


Nov 11 '05 #170
This thread has been closed and replies have been disabled.