Guys I am really stuck on this one. Any help or suggestions would be
appreciated.
We have a large table which seems to have hit some kind of threshold.
The query is somewhat responsive when there are NO indexes on the
table. However, when we index email the query takes forever.
FACTS
- The problem is very "data specific". I cannot recreate the
problem using different data.
- There is only a problem when I index email on the base table.
- The problem goes away when I add "AND b.email IS NOT NULL" to the
inner join condition. It does not help when I add the same logic to the
WHERE clause.
DDL
CREATE TABLE base (bk char(25), email varchar(100))
create clustered index icx on base(bk)
create index ix_email on base(email)
CREATE TABLE filter (bk char(25), email varchar(100))
create clustered index icx on filter (bk)
create index ix_email on filter (email)
Query
SELECT b.bk, b.email
FROM base b WITH(NOLOCK)
INNER JOIN filter f ON f.email = b.email
--and f.email is not null
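For reference, the workaround described in the FACTS above is the same join with the NULL filter uncommented in the ON clause:

```sql
-- Workaround: excluding NULL emails in the join condition
-- restores acceptable performance (per the FACTS above).
SELECT b.bk, b.email
FROM base b WITH(NOLOCK)
INNER JOIN filter f ON f.email = b.email
                   AND b.email IS NOT NULL
```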
Data Profile
--35120500, 35120491, 14221553
SELECT COUNT(*) ,COUNT(DISTINCT bk), COUNT(DISTINCT email)
FROM base
--16796199, 16796192, 14221553
SELECT COUNT(*) ,COUNT(DISTINCT bk), COUNT(DISTINCT email)
FROM base
WHERE email IS NOT NULL
--250552, 250552, 250205
SELECT COUNT(*) ,COUNT(DISTINCT bk), COUNT(DISTINCT email)
FROM filter
--250208, 250208, 250205
SELECT COUNT(*) ,COUNT(DISTINCT bk), COUNT(DISTINCT email)
FROM filter
WHERE email IS NOT NULL
Jan 4 '07
I suspect the reason (...AND ... IS NOT NULL) is helping you out is
that it's forcing the optimizer to choose a table scan instead of
hitting up that index you made on email.
Before you go changing minutiae like ANSI_NULLS, do realize that your
tables are, relationally speaking, nonsense. Any table without a
PRIMARY KEY or otherwise unique key is likely going to cause problems
with the optimizer, and it often happens in bizarre ways like this.
You should do the following:
1) Make the clustered index on "bk" UNIQUE. If you can't do that...
2) Add email to your clustered index as a secondary key and recreate
the clustered index as UNIQUE, or make the combination of the two
columns a PRIMARY KEY. If you can't do that...
3) Analyze your data and fix your process to not allow wholly duplicate
rows, which are nonsensical. If you can't do THAT (ugh):
4) Add an IDENTITY column to your table. So as not to drastically
increase your index sizes, you could either add this column as a
secondary key after bk and make the clustered index UNIQUE, or you
could create a nonclustered PRIMARY KEY on that identity.
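A minimal sketch of option 4, assuming the base table from the DDL in the original post; the column name base_id and the constraint/index names are made up for illustration:

```sql
-- Add a surrogate identity column (name is hypothetical).
ALTER TABLE base ADD base_id INT IDENTITY(1,1) NOT NULL;

-- Variant A: nonclustered PRIMARY KEY on the identity,
-- leaving the existing clustered index on bk alone.
ALTER TABLE base ADD CONSTRAINT PK_base PRIMARY KEY NONCLUSTERED (base_id);

-- Variant B: instead, rebuild the clustered index as UNIQUE
-- with the identity as a secondary key:
-- CREATE UNIQUE CLUSTERED INDEX icx ON base(bk, base_id)
--     WITH (DROP_EXISTING = ON);
```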
Let me know how it goes.
-Dave Markle http://www.markleconsulting.com
Erland Sommarskog wrote:
Dave (da******@gmail.com) writes:
Erland, that was an excellent post! However it was not my problem. The
code is run from a .NET application and I am testing using SQL Server
Management Studio.
I added this to the test script and it did not help.
SET ANSI_NULLS ON
As long as it's a loose script, ANSI_BULLS would be on by default. I will
have to admit that I was clutching at straws. Without access to the
database and seeing it myself, it is very difficult to analyse the problem
accurately. It's not always that easy even if you have full access.
--
Erland Sommarskog, SQL Server MVP, es****@sommarskog.se
Books Online for SQL Server 2005 at http://www.microsoft.com/technet/pro...ads/books.mspx
Books Online for SQL Server 2000 at http://www.microsoft.com/sql/prodinf...ons/books.mspx
I like your explanation!
The table does not currently have a primary key.
I basically have options 2 and 4 available; 1 and 3 are out of my
control.
I will lobby for adding an identity column to this table. It happens
to be very wide with 12 single column indexes so I really don't want
to make the clustered super wide.
Just for clarity's sake: in order for the optimizer to properly
understand the tables, do I have to add a PK constraint to the identity
column?
>Any table without a
PRIMARY KEY or otherwise unique key is likely going to cause problems
with the optimizer, and it often happens in bizarre ways like this.
Do you know of any book or other reference to this (specifically how
the optimizer needs a key)? It would really help me convince everyone
to use keys on everything.
Thanks!
Well, it comes mostly from relational and set theory. I like to
explain the need for PK's like this:
If your wife makes you go to the grocery store and she asks you what
you bought, you'd give a set back as your answer:
"Peas, carrots, potatoes, and Coca-Cola"
You wouldn't say:
"Peas, carrots, potatoes, peas, and Coca-Cola".
The two answers aren't equivalent. One simply makes sense, and one
does not. For example, in set theory, you can express a set of numbers
as:
{1,2,3,4}
but not:
{1,2,2,3,4}. The second representation is invalid.
As with the grocery list and the set of numbers, the engine's
theoretical foundation is relational theory, which is closely tied to
set theory, and sets do not contain duplicates.
If you want a book recommendation, "Inside SQL Server" as well
as a lot of other DB books out there will tell you the same thing --
always have a PK on each table you create.
I'm sorry, I'll admit that I don't have the hard evidence to prove to
you that not having a PK is going to screw up the optimizer. I do try
to stay away from "cargo cult", "I saw this once..." sorts of
recommendations, but in this case, since not having a PK on each table
is such a no-no, I'd try this first. I like #2 best. If "email" is
distinct in the table by itself, that's going to be even better...
Maybe copy your data over to a development database and try these out
and see how it goes.
Good luck.
-Dave
Thanks for the explanation.
I copied the tables to a test db. I created an identity column on each
table and created a unique clustered index on the identity columns. I then
created a nonclustered index on bk and a nonclustered index on email.
This did not help the query performance.
I will keep checking back here to see if anyone else has any
suggestions on how to troubleshoot this. It is no longer a critical
issue due to the workaround. I just don't like the fact that we cannot
explain this behavior.
Dave (da******@gmail.com) writes:
Do you know of any book or other reference to this (specifically how
the optimizer needs a key)? It would really help me convince everyone
to use keys on everything.
It's not really a matter of the optimizer needing a key, and adding an
IDENTITY column is not going to change this particular problem. It's
more about database design in general. Every table in a database should
have a primary key. Preferably this should be a natural key, but this
is not always possible. In a relational database, you access data through
data, that is, through the primary key.
By defining good primary keys, you avoid duplicate data. With further
normalisation, there are more anomalies you can avoid.
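To make that concrete for the tables in this thread, a natural key would look something like the following. This is only a sketch, and the constraint names are invented; per the data profile, base has a handful of bk values that are duplicated or NULL, which would have to be cleaned up first:

```sql
-- Only valid once bk is unique and NOT NULL in each table.
-- The existing clustered index icx must be dropped first.
ALTER TABLE base   ADD CONSTRAINT PK_base_bk   PRIMARY KEY CLUSTERED (bk);
ALTER TABLE filter ADD CONSTRAINT PK_filter_bk PRIMARY KEY CLUSTERED (bk);
```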
--
Erland Sommarskog, SQL Server MVP, es****@sommarsk og.se
Interesting problem. It seems that the optimizer likes the "not null"
hint and is treating it differently than other predicates. I see you
do have a lot of NULLs in your data.
I've run into a number of situations with minimally indexed tables,
where adding additional indexes slows down execution (of selects).
This is always weird, since one would hope the optimizer could judge
and ignore the index in those cases!
The size of your data can count, too. In fact, I'm working on
something along those lines these days myself. Once the optimizer
realizes your data is a lot larger than your RAM, it has to switch to
multipass sorts and merges which are MUCH slower. I'm not sure these
even show in the execution plans, estimated or actual. Maybe it
shows with SET STATISTICS PROFILE.
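For what it's worth, SET STATISTICS PROFILE does report actual per-operator row counts after the query runs, so one way to chase this is to compare actuals against the optimizer's estimates on the email index:

```sql
SET STATISTICS PROFILE ON;   -- actual rows per plan operator
SET STATISTICS IO ON;        -- logical/physical reads per table

SELECT b.bk, b.email
FROM base b WITH(NOLOCK)
INNER JOIN filter f ON f.email = b.email;

SET STATISTICS PROFILE OFF;
SET STATISTICS IO OFF;
```

A large gap between EstimateRows and Rows on the ix_email seek or scan would point at the statistics on the NULL-heavy email column.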
I don't have any specific suggestions. Keeping the null-valued
records in a separate table, or something along those lines, is
somewhat extreme but might help out; then again, you have your
workaround, so what the heck! Maybe SQL 2005 partitioned tables ...
Josh
On 8 Jan 2007 13:00:17 -0800, "Dave" <da******@gmail.com> wrote:
>Thanks for the explanation.
>I copied the tables to a test db. I created an identity column on each table and created a unique clustered index on the identity columns. I then created a nonclustered index on bk and a nonclustered index on email.
>This did not help the query performance.
Hi,
I have modified your query a little bit and hope this will help.
DDL
CREATE TABLE base (bk char(25), email varchar(100))
create clustered index ix_email on base(email)
create index icx on base(bk)
CREATE TABLE filter (bk char(25), email varchar(100))
create clustered index ix_email on filter (email)
create index icx on filter (bk)
Query
SELECT b.bk, b.email
FROM base b WITH(NOLOCK)
INNER JOIN filter f ON f.email = b.email
--and f.email is not null
On Fri, 5 Jan 2007 23:18:30 +0000 (UTC), Erland Sommarskog wrote:
ANSI_BULLS
Hi Erland,
Is that your secret nickname for Joe Celko? <g>
--
Hugo Kornelis, SQL Server MVP
My SQL Server blog: http://sqlblog.com/blogs/hugo_kornelis