Bytes | Software Development & Data Engineering Community

Conditional summarizing

Hi all
I'm trying to create a summary table of rain data that holds a record
for every 0.1mm of rain in the following format:

Station Name, Time, Value
A, 2002-12-03 14:44:41.000, 0.1
A, 2002-12-03 14:45:45.000, 0.1
A, 2002-12-03 14:49:45.000, 0.1
A, 2002-12-09 05:30:35.000, 0.1
A, 2002-12-09 05:30:37.000, 0.1
A, 2002-12-09 05:33:05.000, 0.1
B, 2002-12-09 05:32:47.000, 0.1
B, 2002-12-09 05:33:00.000, 0.1
B, 2002-12-09 05:35:00.000, 0.1
B, 2002-12-09 05:37:48.000, 0.1
....
B, 2003-02-09 01:32:47.000, 0.1
B, 2003-02-09 05:32:47.000, 0.1

The problem is that I need to count rain events for each station and
produce the starting and ending time of each event (based on the Time
field), and to sum the amount of rain that fell during the event.
An "event", for that matter, is a case where there is a gap of at
least 12 hours with no rain before and after a series of rain
measurements (i.e., a rain storm). The data in the table is written in
such a way that each row holds the time at which an amount of 0.1mm of
rain was counted by the rain gauge.

For example, the above data should produce:

Station Name, Start Time, End Time, Total rain
A, 2002-12-03 14:44:41.000, 2002-12-03 14:49:45.000, 0.3
A, 2002-12-09 05:30:35.000, 2002-12-09 05:33:05.000, 0.3
B, 2002-12-09 05:32:47.000, 2002-12-09 05:37:48.000, 0.3
B, 2003-02-09 01:32:47.000, 2003-02-09 05:32:47.000, 0.2

As a newbie to SQL I don't know if it can be done.

Many thanks,
Ilik

Jul 23 '05 #1
On 14 Feb 2005 08:00:24 -0800, Ilik wrote:
I'm trying to create a summary table of rain data that holds a record
for every 0.1mm of rain...
(snip)
As a newbie to SQL I don't know if it can be done.


Hi Ilik,

It can be done, but it's quite complicated. You have to use several
tricks, then combine them into one query.

Trick 1: Find the start of a rain period. Each row for which no other row
exists with a time value in the preceding 12 hours marks the start of a
rain period. You can find these using NOT EXISTS or using an OUTER JOIN.
SELECT Station
     , Time
FROM MyTable AS a
WHERE NOT EXISTS (SELECT *
                  FROM MyTable AS b
                  WHERE b.Station = a.Station
                    AND b.Time < a.Time
                    AND b.Time >= DATEADD(hour, -12, a.Time))
or
SELECT a.Station
     , a.Time
FROM MyTable AS a
LEFT OUTER JOIN MyTable AS b
  ON b.Station = a.Station
 AND b.Time < a.Time
 AND b.Time >= DATEADD(hour, -12, a.Time)
WHERE b.Station IS NULL  -- keep only rows with no rain in the preceding 12 hours
I'll use the NOT EXISTS version in the remainder of this message; if
performance is important for you, I'd advise you to test both versions.

Trick 2: Find the end of a rain period. This is basically the same
technique as trick 1.
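For completeness, a sketch of that mirror query (the same anti-join
pattern, looking 12 hours ahead instead of back):

SELECT Station
     , Time
FROM MyTable AS a
WHERE NOT EXISTS (SELECT *
                  FROM MyTable AS b
                  WHERE b.Station = a.Station
                    AND b.Time > a.Time
                    AND b.Time <= DATEADD(hour, 12, a.Time))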

Trick 3: Match up the start and end times of each rain period. For each
start, the matching end is the FIRST of all ends that occur AFTER that
start.
SELECT a.Station
     , a.Time AS StartTime
     , MIN(b.Time) AS EndTime
FROM MyTable AS a
INNER JOIN MyTable AS b
  ON b.Station = a.Station
 AND b.Time > a.Time
WHERE NOT EXISTS (SELECT *
                  FROM MyTable AS c
                  WHERE c.Station = a.Station
                    AND c.Time < a.Time
                    AND c.Time >= DATEADD(hour, -12, a.Time))
  AND NOT EXISTS (SELECT *
                  FROM MyTable AS d
                  WHERE d.Station = b.Station
                    AND d.Time > b.Time
                    AND d.Time <= DATEADD(hour, +12, b.Time))
GROUP BY a.Station
       , a.Time

Trick 4: Find all rows between the start and end time of a rain period.
This could be done by using the result of the previous step as a
derived table and joining it to the original table (that variant is
sketched after the query below), but in this case, I decided to extend
the result of the previous step with one correlated subquery.
SELECT a.Station
     , a.Time AS StartTime
     , MIN(b.Time) AS EndTime
     , (SELECT SUM(e.Value)
        FROM MyTable AS e
        WHERE e.Station = a.Station
          AND e.Time >= a.Time
          AND e.Time <= MIN(b.Time)) AS TotalRain
FROM MyTable AS a
INNER JOIN MyTable AS b
  ON b.Station = a.Station
 AND b.Time > a.Time
WHERE NOT EXISTS (SELECT *
                  FROM MyTable AS c
                  WHERE c.Station = a.Station
                    AND c.Time < a.Time
                    AND c.Time >= DATEADD(hour, -12, a.Time))
  AND NOT EXISTS (SELECT *
                  FROM MyTable AS d
                  WHERE d.Station = b.Station
                    AND d.Time > b.Time
                    AND d.Time <= DATEADD(hour, +12, b.Time))
GROUP BY a.Station
       , a.Time
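For comparison, a sketch of the derived-table variant mentioned above
(equivalent logic, untested like the rest):

SELECT p.Station
     , p.StartTime
     , p.EndTime
     , SUM(e.Value) AS TotalRain
FROM (SELECT a.Station
           , a.Time AS StartTime
           , MIN(b.Time) AS EndTime
      FROM MyTable AS a
      INNER JOIN MyTable AS b
        ON b.Station = a.Station
       AND b.Time > a.Time
      WHERE NOT EXISTS (SELECT *
                        FROM MyTable AS c
                        WHERE c.Station = a.Station
                          AND c.Time < a.Time
                          AND c.Time >= DATEADD(hour, -12, a.Time))
        AND NOT EXISTS (SELECT *
                        FROM MyTable AS d
                        WHERE d.Station = b.Station
                          AND d.Time > b.Time
                          AND d.Time <= DATEADD(hour, +12, b.Time))
      GROUP BY a.Station
             , a.Time) AS p
INNER JOIN MyTable AS e
  ON e.Station = p.Station
 AND e.Time >= p.StartTime
 AND e.Time <= p.EndTime
GROUP BY p.Station
       , p.StartTime
       , p.EndTime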

Note: all the queries above are untested. To get tested replies, you need
to post SQL that will recreate your tables and data on my system, as shown
in www.aspfaq.com/5006.

Best, Hugo
--

(Remove _NO_ and _SPAM_ to get my e-mail address)
Jul 23 '05 #2
[posted and mailed, please reply in news]

Ilik (le***@walla.co.il) writes:
The problem is that I need to count rain events for each station and
produce the starting and ending time of each event (based on the Time
field), and to sum the amount of rain that fell during the event.
(snip)


By no means a trivial problem. As Hugo said, CREATE TABLE and INSERT
statements are always a good thing. Then again, the table was easy
to compose in this case, as was the INSERT statement. And your
narrative and sample data were very good. Thanks!

Here is a multi-step approach. Maybe not the most elegant, but it
gives the correct result. If data volumes are large, performance
could be an issue, though.

A particular note on the datediff exercise. I do the datediff per
minute, not per hour. This is because datediff(HOUR, '00:15:00',
'12:10:10') is 12, although it's not fully 12 hours: datediff always
counts crossed boundaries. Also, I found that the coalesce defaults
needed special care. Originally I had 19000101 and 99991231, but that
gave me an overflow.
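For illustration, a quick check of that boundary counting (the date
part here is arbitrary):

SELECT DATEDIFF(hour,   '20050101 00:15:00', '20050101 12:10:10')
    -- 12, although less than 12 full hours elapsed
SELECT DATEDIFF(minute, '20050101 00:15:00', '20050101 12:10:10')
    -- 715, correctly under the 720-minute threshold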

CREATE TABLE data(station char(1)  NOT NULL,
                  time    datetime NOT NULL,
                  value   float    NOT NULL,
                  CONSTRAINT pk_data PRIMARY KEY (station, time))
go
INSERT data (station, time, value)
SELECT 'A', '20021203 14:44:41.000', 0.1 UNION
SELECT 'A', '20021203 14:45:45.000', 0.1 UNION
SELECT 'A', '20021203 14:49:45.000', 0.1 UNION
SELECT 'A', '20021209 05:30:35.000', 0.1 UNION
SELECT 'A', '20021209 05:30:37.000', 0.1 UNION
SELECT 'A', '20021209 05:33:05.000', 0.1 UNION
SELECT 'B', '20021209 05:32:47.000', 0.1 UNION
SELECT 'B', '20021209 05:33:00.000', 0.1 UNION
SELECT 'B', '20021209 05:35:00.000', 0.1 UNION
SELECT 'B', '20021209 05:37:48.000', 0.1 UNION
SELECT 'B', '20030209 01:32:47.000', 0.1 UNION
SELECT 'B', '20030209 05:32:47.000', 0.1 UNION
SELECT 'B', '20030209 18:32:47.000', 0.1 UNION
SELECT 'B', '20030212 05:32:47.000', 0.1
go
CREATE TABLE #temp (station   char(1)  NOT NULL,
                    time      datetime NOT NULL,
                    n         int      NOT NULL,
                    starttime datetime NULL,
                    endtime   datetime NULL,
                    value     float    NOT NULL,
                    PRIMARY KEY (station, n))
go
-- n numbers each station's readings chronologically: 1, 2, 3, ...
INSERT #temp (station, time, value, n)
SELECT a.station, a.time, a.value,
       (SELECT COUNT(*) + 1 FROM data b
        WHERE a.station = b.station AND a.time > b.time)
FROM data a
go
-- Mark a row as an event start/end when its neighbour (by n) is more
-- than 12 hours away; coalesce supplies a far-away time at the edges.
UPDATE b
SET starttime = CASE WHEN datediff(minute,
                          coalesce(a.time, dateadd(DAY, -1, b.time)),
                          b.time) > 60*12
                     THEN b.time
                END,
    endtime   = CASE WHEN datediff(minute,
                          b.time,
                          coalesce(c.time, dateadd(DAY, 1, b.time))) > 60*12
                     THEN b.time
                END
FROM #temp b
LEFT JOIN #temp a ON a.station = b.station AND b.n - 1 = a.n
LEFT JOIN #temp c ON b.station = c.station AND b.n + 1 = c.n
go
-- Propagate each event's start time to all rows of that event.
UPDATE #temp
SET starttime = (SELECT MAX(b.starttime)
                 FROM #temp b
                 WHERE b.station = a.station
                   AND b.starttime < a.time)
FROM #temp a
WHERE a.starttime IS NULL
SELECT station, starttime, MAX(endtime), SUM(value)
FROM #temp
GROUP BY station, starttime
go
DROP TABLE #temp
DROP TABLE data
--
Erland Sommarskog, SQL Server MVP, es****@sommarskog.se

Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techinf...2000/books.asp
Jul 23 '05 #3
On Tue, 15 Feb 2005 21:28:13 +0000 (UTC), Erland Sommarskog wrote:
By no means a trivial problem. As Hugo said, CREATE TABLE and INSERT
statements are always a good thing. Then again, the table was easy
to compose in this case, as was the INSERT statement.
Hi Erland,

You're absolutely right. I guess this just shows that I'm lazy ;-)

I just stole your work to test my query and found one small error. The
corrected version (adapted to the table and column names you've chosen) is

SELECT a.station
     , a.time AS Starttime
     , MIN(b.time) AS Endtime
     , (SELECT SUM(e.value)
        FROM data AS e
        WHERE e.station = a.station
          AND e.time >= a.time
          AND e.time <= MIN(b.time)) AS TotalRain
FROM data AS a
INNER JOIN data AS b
  ON b.station = a.station
 AND b.time >= a.time  -- The change is in this line
WHERE NOT EXISTS (SELECT *
                  FROM data AS c
                  WHERE c.station = a.station
                    AND c.time < a.time
                    AND c.time >= DATEADD(hour, -12, a.time))
  AND NOT EXISTS (SELECT *
                  FROM data AS d
                  WHERE d.station = b.station
                    AND d.time > b.time
                    AND d.time <= DATEADD(hour, +12, b.time))
GROUP BY a.station
       , a.time
And your narrative and sample data were very good. Thanks!
I can only second that.

Here is a multi-step approach. Maybe not the most elegant, but it
gives the correct result. If data volumes are large, performance
could be an issue, though.
I ran some quick tests for a first impression. Based on the difference
from start to end time for both queries, using the sample data provided,
it was too close to call. Increasing the amount of test data (copying the
insert two more times and changing the station names to C, D, E and F)
resulted in a small advantage for my query, but with so little test data,
that is not really saying anything.

With set statistics io on, my query used

Table 'data'. Scan count 151, logical reads 302, physical reads 0,
read-ahead reads 0.

And your code used

Table '#temp__(snip)'. Scan count 0, logical reads 86, physical reads 0,
read-ahead reads 0.
Table 'data'. Scan count 43, logical reads 86, physical reads 0,
read-ahead reads 0.

Table '#temp__(snip)'. Scan count 85, logical reads 254, physical reads 0,
read-ahead reads 0.
Table 'Worktable'. Scan count 1, logical reads 44, physical reads 0,
read-ahead reads 0.

Table '#temp__(snip)'. Scan count 25, logical reads 98, physical reads 0,
read-ahead reads 0.
Table 'Worktable'. Scan count 1, logical reads 26, physical reads 0,
read-ahead reads 0.

Table '#temp__(snip)'. Scan count 1, logical reads 2, physical reads 0,
read-ahead reads 0.

Adding up these numbers shows that your version appears to do quite a
bit more work than my version.

I'd be interested to hear from Ilik how the versions compare on his
system, with real data.

A particular note on the datediff exercise. I do the datediff per
minute, not per hour. (snip) Also, I found that the coalesce defaults
needed special care. Originally I had 19000101 and 99991231, but that
gave me an overflow.


Why didn't you use DATEADD(hour, 12, a.time) > b.time, then? Wouldn't that
have solved all these issues?
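For illustration, the start/end marking from the earlier UPDATE
rewritten that way (a sketch; it drops both the minute arithmetic and
the coalesce defaults):

UPDATE b
SET starttime = CASE WHEN a.time IS NULL                      -- first reading
                       OR a.time < DATEADD(hour, -12, b.time) -- gap before
                     THEN b.time
                END,
    endtime   = CASE WHEN c.time IS NULL                      -- last reading
                       OR c.time > DATEADD(hour, 12, b.time)  -- gap after
                     THEN b.time
                END
FROM #temp b
LEFT JOIN #temp a ON a.station = b.station AND b.n - 1 = a.n
LEFT JOIN #temp c ON b.station = c.station AND b.n + 1 = c.n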

Best, Hugo
--

(Remove _NO_ and _SPAM_ to get my e-mail address)
Jul 23 '05 #4
I have to walk two dogs and install a Pergo floor this morning, so I
cannot work out the code in detail.

Can I assume that the measurements are collected by a device that
gives you a collection time? This means the data comes in over time,
and each reading is a few microseconds behind the current time.

Instead of modeling the data collection form, model the final results:

CREATE TABLE Rainfall
(station_name CHAR(5) NOT NULL,
 start_time DATETIME NOT NULL,
 end_time DATETIME DEFAULT CURRENT_TIMESTAMP NOT NULL,
 rain_accumulation DECIMAL (5,2) DEFAULT 0.00 NOT NULL,
 CHECK (start_time < end_time),
 PRIMARY KEY (station_name, start_time));

When a new reading comes in:

1) If it is within 12 hours of the end of the current row (i.e.,
MAX(end_time)), then add 0.1 to the rain_accumulation and set the
end_time to CURRENT_TIMESTAMP in an UPDATE.

2) If it is more than 12 hours after the end of the current row,
then

2.1) update end_time to CURRENT_TIMESTAMP to close out the current
period.
2.2) insert a new row with the new start_time, etc.

Something like this in pseudo-code:

CREATE PROCEDURE PostNewReading(@my_station_name, @new_reading_time)
AS
BEGIN
UPDATE Rainfall
   SET end_time
       = CASE WHEN <<less than 12 hours
                     from most recent>>
              THEN @new_reading_time
              ELSE end_time END,
       rain_accumulation
       = CASE WHEN <<less than 12 hours
                     from most recent>>
              THEN rain_accumulation + 0.1
              ELSE rain_accumulation END;

INSERT INTO Rainfall
SELECT @my_station_name, @new_reading_time,
       CURRENT_TIMESTAMP, 0.1
  FROM Rainfall
 WHERE <<reading more than 12 hours
         from most recent>>;
END;

The idea is to put the procedural logic into CASE expressions in an
UPDATE and an INSERT statement. Use ELSE clauses to leave data
unchanged. No proprietary T-SQL control-of-flow stuff at all. No
tricky self-joins or cursors. A concrete version is sketched below.
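One possible concrete reading of those placeholders (an assumption
about the intent; "most recent" is taken to be the row with the latest
end_time per station):

-- Hypothetical fleshed-out version of the pseudo-code above.
CREATE PROCEDURE PostNewReading
    @my_station_name  CHAR(5),
    @new_reading_time DATETIME
AS
BEGIN
-- Extend the most recent event when the reading is within 12 hours.
UPDATE Rainfall
   SET end_time
       = CASE WHEN @new_reading_time < DATEADD(hour, 12, end_time)
              THEN @new_reading_time
              ELSE end_time END,
       rain_accumulation
       = CASE WHEN @new_reading_time < DATEADD(hour, 12, end_time)
              THEN rain_accumulation + 0.1
              ELSE rain_accumulation END
 WHERE station_name = @my_station_name
   AND end_time = (SELECT MAX(end_time) FROM Rainfall
                   WHERE station_name = @my_station_name);

-- Otherwise open a new event. Relies on the premise above that a
-- reading arrives slightly behind CURRENT_TIMESTAMP, so the
-- CHECK (start_time < end_time) holds.
INSERT INTO Rainfall (station_name, start_time, end_time, rain_accumulation)
SELECT @my_station_name, @new_reading_time, CURRENT_TIMESTAMP, 0.1
 WHERE NOT EXISTS (SELECT * FROM Rainfall
                   WHERE station_name = @my_station_name
                     AND @new_reading_time < DATEADD(hour, 12, end_time));
END;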

Jul 23 '05 #5
Hi all
First, my sincere apologies for not replying earlier, but I was in
over my head in the last couple of days with other projects.
In reply to CELKO: you're assuming that the data is being collected in
real time, but this is not the case. The data goes back a few decades
in some cases (starting in 1965), so I cannot evaluate it on entry.

As for Erland's solution, I ran it for only a few minutes because of
his remark that performance can be an issue there. I tried running
Hugo's query with the "NOT EXISTS" option. It ran for two hours before
I stopped it... The second option ("LEFT OUTER JOIN") I stopped after
15 minutes.
The table holding the data has over 1.3 million records; besides the
rain gauge data it also holds other data types (e.g., water
conductivity, water level, water chemistry etc.) in the "Class_Name"
column. I didn't mention this in my original post because I thought I
could deal with it after I got your help with the basic query, but now
I see it is much more complex than I thought. I added this to Hugo's
query as you can see below. So the efficiency of the query is an
important issue here. Do you have any ideas on how to make it run
faster?

Here is the query as I ran it, with both options:

--START
SELECT a.Location_Name
     , a.Time AS StartTime
     , MIN(b.Time) AS EndTime
     , (SELECT SUM(e.Value)
        FROM V_Cell AS e
        WHERE e.Location_Name = a.Location_Name
          AND e.Class_Name = 'Rain'
          AND e.Time >= a.Time
          AND e.Time <= MIN(b.Time)) AS TotalRain
FROM V_Cell AS a
INNER JOIN V_Cell AS b
  ON b.Location_Name = a.Location_Name
 AND b.Class_Name = 'Rain'
 AND b.Time >= a.Time
-- WHERE NOT EXISTS (SELECT *
--                   FROM V_Cell AS c
--                   WHERE c.Location_Name = a.Location_Name
LEFT OUTER JOIN V_Cell AS c
  ON c.Location_Name = a.Location_Name
 AND c.Class_Name = 'Rain'
 AND c.Time < a.Time
 AND c.Time >= DATEADD(hour, -12, a.Time)
-- AND NOT EXISTS (SELECT *
--                 FROM V_Cell AS d
--                 WHERE d.Location_Name = b.Location_Name
LEFT OUTER JOIN V_Cell AS d
  ON d.Location_Name = b.Location_Name
 AND d.Class_Name = 'Rain'
 AND d.Time > b.Time
 AND d.Time <= DATEADD(hour, +12, b.Time)
WHERE a.Class_Name = 'Rain'    -- a must be restricted to rain rows too
  AND c.Location_Name IS NULL  -- makes the outer joins act as anti-joins
  AND d.Location_Name IS NULL  -- (no rain within 12 hours before/after)
GROUP BY a.Location_Name
       , a.Time
--END

Many thanks,
Ilik

Jul 23 '05 #6
On 17 Feb 2005 01:06:22 -0800, Ilik wrote:

(snip)
Do you have any ideas on how to make it run faster?


Hi Ilik,

I don't think there's much room for simplification of the query, so I'd
look into indexes. I don't expect my query to return results in a fraction
of a second, but two hours appears to be too much. I'm afraid that you
have no indexes on your table that can be used for this query.

In order for us to advise on this, we need to know the actual table
structure (as CREATE TABLE statement, **INCLUDING** all constraints [esp
primary key and unique!]) and all extra indexes you might have defined.
You might wish to run sp_help V_Cell to get a full report on all indexes
and constraints.

Another thing you could do is to run the index tuning wizard (ITW): load
my query in a QA window, select "Query" / "Index Tuning Wizard" and see
what it advises you. ITW is certainly not always perfect, but it never
hurts to check out what it comes up with.
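For example, something along these lines might be a candidate (a guess,
since we haven't seen the real structure; also note the V_ prefix
suggests V_Cell may be a view, in which case the underlying base table
is what needs the index):

CREATE INDEX ix_rain ON V_Cell (Location_Name, Class_Name, Time)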

Best, Hugo
--

(Remove _NO_ and _SPAM_ to get my e-mail address)
Jul 23 '05 #7
Ilik (le***@walla.co.il) writes:
The table holding the data has over 1.3 million records; besides the
rain gauge data it also holds other data types (e.g., water
conductivity, water level, water chemistry etc.) in the "Class_Name"
column.
(snip)
So the efficiency of the query is an important issue here. Do you have
any ideas on how to make it run faster?


Your data volumes are certainly on par with what I expected. (No one
would have a table like that with only a few thousand rows!) But
without test data available, it's difficult to post a solution that
is tested for performance.

For my query batch, it could be an idea to let it run for a little
longer, and see what actually takes time.

Certainly there is every reason to review the indexing of the table,
and, as Hugo said, posting the complete CREATE TABLE and CREATE INDEX
statements for the table may help.

One solution that cannot be ruled out completely is to run a cursor
over the lot. That is by no means going to be blindingly fast, but
you can quite soon get a grip on when it will complete (as you would
be able to see how much has been processed). But that would really be
a last-ditch attempt if nothing else works.

--
Erland Sommarskog, SQL Server MVP, es****@sommarskog.se

Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techinf...2000/books.asp
Jul 23 '05 #8
>> The data goes back a few decades in some cases (starting in 1965),
so I cannot evaluate it on entry ... The table holding the data has
over 1.3 million records; besides the rain gauge data it also holds
other data types (e.g., water conductivity, water level, water
chemistry etc.) in the "Class_Name" column. <<

Ouch! I think this is a case where I would go with a cursor, since it
sounds like you will do this only once. That gives you one scan through
the whole table.

Any other method I can think of would use a self-join to get the start
and finish times of each collection period. Even with indexing,
self-joins take time.
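For what it's worth, a sketch of that single-scan cursor, written
against the sample "data" table from earlier in the thread (#events is
a hypothetical results table; untested at real volumes):

CREATE TABLE #events (station   char(1)  NOT NULL,
                      starttime datetime NOT NULL,
                      endtime   datetime NOT NULL,
                      total     float    NOT NULL)

DECLARE @station char(1), @time datetime, @value float
DECLARE @cur char(1), @start datetime, @prev datetime, @total float

DECLARE rain_cur CURSOR FAST_FORWARD FOR
    SELECT station, time, value
    FROM data
    ORDER BY station, time

OPEN rain_cur
FETCH NEXT FROM rain_cur INTO @station, @time, @value

WHILE @@FETCH_STATUS = 0
BEGIN
    -- A new event begins at the first row, at a station change,
    -- or after a gap of more than 12 hours.
    IF @cur IS NULL OR @station <> @cur
       OR @time > DATEADD(hour, 12, @prev)
    BEGIN
        IF @cur IS NOT NULL   -- close out the previous event
            INSERT #events VALUES (@cur, @start, @prev, @total)
        SELECT @cur = @station, @start = @time, @total = 0
    END
    SELECT @total = @total + @value, @prev = @time
    FETCH NEXT FROM rain_cur INTO @station, @time, @value
END

IF @cur IS NOT NULL           -- flush the final event
    INSERT #events VALUES (@cur, @start, @prev, @total)

CLOSE rain_cur
DEALLOCATE rain_cur

SELECT station, starttime, endtime, total
FROM #events
ORDER BY station, starttime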

Jul 23 '05 #9
"--CELKO--" <jc*******@eart hlink.net> wrote in message
news:11******** *************@c 13g2000cwb.goog legroups.com...
(snip: quote of the previous post)


<Uninformed musings>
As others have stated, it is difficult to gauge the efficacy of the
suggestions without knowing more about the table structure.

Is it possible for the OP to extract the "Rain" records and create a
new table with appropriate indexing (a sketch follows below)? He
stated there are 1.3 million records TOTAL, but how many are "Rain"
records, and would that reduce the work to an acceptable level for one
of the aforementioned queries to complete?
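Something like this, perhaps (hypothetical names; Rain_only is just a
scratch table):

SELECT Location_Name, Time, Value
INTO Rain_only
FROM V_Cell
WHERE Class_Name = 'Rain'

CREATE CLUSTERED INDEX ix_rain ON Rain_only (Location_Name, Time)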

How about a hybrid approach? What if you cursor-ed through the data by
"Station Name" and used one of the queries with the additional
condition on "Station Name"? I presume a table/list of the station
names exists; otherwise you could "SELECT DISTINCT". As Hugo K. has
demonstrated, the queries work well enough for a small case, which
this would be, especially if there is an index on "Station Name".
</Uninformed musings>
Jul 23 '05 #10
