>I'm missing something in the MySQL clustering design. In the basic
>design outlined in the How-To, there are two NDBD nodes, a manager node
>and a MySQL node (ndb). If I now write an application that connects to
>the single MySQL node, and it fails ... well then the redundant NDBD
>nodes on the back end don't seem to help much.
>Am I missing something here? I want to put a design together that has
>no single point of failure.
If you want to avoid a single point of failure, clients must be
given a list of servers to connect to, regardless of how redundant
the servers are, unless you've got multiple clients that can back
each other up (e.g. multiple web servers with identical content
behind a layer 4 switch, with provisions to stop handing requests
to servers that are down). This applies to any client/server setup
(including DNS, web servers, and MySQL).
The MySQL cluster design has storage nodes and SQL nodes. A client
connects to a SQL node. Any SQL node can use any storage node.
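As an illustration only, here's a minimal sketch (Python, using the
mysql-connector-python package) of a client that keeps its own list of
SQL nodes and tries each one until a connection succeeds. The host
names, credentials and database name are placeholders, not anything
from the How-To:

    import mysql.connector
    from mysql.connector import Error

    SQL_NODES = ["sqlnode1.example.com", "sqlnode2.example.com"]  # hypothetical hosts

    def connect_to_any(nodes):
        # Return a connection to the first reachable SQL node, or raise.
        last_error = None
        for host in nodes:
            try:
                return mysql.connector.connect(
                    host=host,
                    user="app",            # placeholder credentials
                    password="secret",
                    database="appdb",
                    connection_timeout=5,  # fail over quickly if a node is down
                )
            except Error as exc:
                last_error = exc           # remember the failure, try the next node
        raise RuntimeError("no SQL node reachable") from last_error

    conn = connect_to_any(SQL_NODES)

The short connection timeout matters: without it, a node that is down
but not refusing connections can stall the failover for minutes.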
You might get acceptable (but not as good as doing it right)
redundancy by putting a SQL node on each host that has a client, and
having each client use only "localhost" as its server. This does
not cover failures where the client is up but the SQL node on the
same host is down (e.g. filesystem damage to the programs, the
network interface down, the system running with too little memory
to run a SQL node, etc.).
The clients must also deal gracefully with the possibility that the
connection fails while it is in use, possibly by reconnecting and
starting over. Reconnecting and retrying from the current query may
be dangerous, depending on the situation: you may not be on the
same database, last_insert_id() may not be set right, you may no
longer be in the middle of a transaction, various modes might not
be set right, etc.
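As a rough sketch of the "start over" approach (again Python with
mysql-connector-python, all table and account names hypothetical):
the whole transaction is rerun from the beginning on a fresh
connection, and per-session state is re-established first, rather
than resuming at the query that failed:

    import mysql.connector
    from mysql.connector import Error

    def transfer(conn, from_acct, to_acct, amount):
        # The complete unit of work; always rerun from the beginning.
        cur = conn.cursor()
        conn.start_transaction()
        cur.execute("UPDATE accounts SET balance = balance - %s WHERE id = %s",
                    (amount, from_acct))
        cur.execute("UPDATE accounts SET balance = balance + %s WHERE id = %s",
                    (amount, to_acct))
        conn.commit()

    def run_with_retry(connect, attempts=3):
        # connect: a zero-argument callable returning a fresh connection,
        # e.g. lambda: connect_to_any(SQL_NODES) from the sketch above.
        for _ in range(attempts):
            conn = connect()
            try:
                # Re-establish per-session state a new connection won't have.
                conn.cursor().execute("SET sql_mode = 'TRADITIONAL'")
                transfer(conn, 1, 2, 100)
                return
            except Error:
                # Connection failed in mid-use: throw away the partial work
                # and start the whole transaction over on a new connection.
                try:
                    conn.close()
                except Error:
                    pass
        raise RuntimeError("transaction still failing after %d attempts" % attempts)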
Gordon L. Burditt