Bytes | Software Development & Data Engineering Community

transposed query?

Hi;
I've got a problem with a query. I'm not sure whether it is possible to do this
with Postgres, although I think it should be.

I have solved this kind of query before in MS Access, where they are called
"crosstab" (cross-reference) queries, but I can't find my way in Postgres.

My table 'muestras_rambla' is like:
          date           | id_punto | muestra |  flow   | n_nitrato ...
-------------------------+----------+---------+---------+-----------
 06/02/2003 00:00:00 CET |        1 |       1 | 699.462 |     18.44
 06/02/2003 00:00:00 CET |        1 |       2 | 699.462 |     13.79
 06/02/2003 00:00:00 CET |        2 |       1 | 341.05  |     11.15
 06/02/2003 00:00:00 CET |        2 |       2 | 341.05  |     17.3
 06/02/2003 00:00:00 CET |        3 |       1 | 514.05  |     22.17
 06/02/2003 00:00:00 CET |        3 |       2 | 514.05  |     16.62
 06/02/2003 00:00:00 CET |        4 |       1 | 466.884 |     18.99
 06/02/2003 00:00:00 CET |        4 |       2 | 466.884 |     19.02
 06/02/2003 00:00:00 CET |        6 |       1 | 30.012  |      5.91
 06/02/2003 00:00:00 CET |        6 |       2 | 30.012  |      4.15
 19/02/2003 00:00:00 CET |        1 |       1 | 911.43  |     17.66
 19/02/2003 00:00:00 CET |        1 |       2 | 911.43  |      9.23
 ...

where 'id_punto' can take values from 1 to 6, and 'muestra' is 1 or 2.
Usually, for each 'date' and 'id_punto' we have two samples (muestra), and
the 'flow' is usually the same for both samples.

What I want is, for every 'date', the 'time series' of flows laid out like
this:
date            1        2        3        4       6
03/10/2002    606.92   323.08   440.69   393.61   5.49
17/10/2002    348.19   400.32   319.33   211.26   2.53
29/10/2002    411.57   409.66   310.37   362.20   9.36
14/11/2002    446.96   373.72   302.79   348.69   5.18
26/11/2002    381.20   386.85   307.77   240.75   6.41
12/12/2002    416.84   307.03   317.94   249.36   5.64
26/12/2002    743.04   380.30   539.94   521.60   3.69
09/01/2003    403.07   301.16   370.30   379.41   3.69
...

where the column headings are the values of 'id_punto' and each data value is
the average (for each 'id_punto' and 'date') of the two samples (muestra).

In MS Access this is solved with:
TRANSFORM avg(muestras_rambla.flow) AS avg_flow
SELECT muestras_rambla.date
FROM muestras_rambla
GROUP BY muestras_rambla.date
PIVOT muestras_rambla.id_punto;

All I can do in Postgres is (in my real table the columns are called 'fecha'
and 'caudal', i.e. date and flow):
--------
SELECT muestras_rambla.fecha, id_punto, avg(caudal) AS avg_caudal
FROM muestras_rambla
GROUP BY fecha, id_punto
--------
to get:
           fecha           | id_punto | avg_caudal
---------------------------+----------+------------
 03/10/2002 00:00:00 CEST  |        1 |    606.924
 03/10/2002 00:00:00 CEST  |        2 |    323.079
 03/10/2002 00:00:00 CEST  |        3 |    440.690
 ...
But how could I manage this to get the 'time series' table?

Thanks for your help and best wishes

Javier


Nov 12 '05 #1
javier garcia - CEBAS wrote:
[quoted text from the original message snipped]

How about:
SELECT
    date,
    (SELECT s1.flow FROM muestras_rambla s1
      WHERE s1.date = t1.date AND s1.id_punto = 1) AS flow_1,
    (SELECT s2.flow FROM muestras_rambla s2
      WHERE s2.date = t1.date AND s1.id_punto = 2) AS flow_2,
    (SELECT s3.flow FROM muestras_rambla s3
      WHERE s3.date = t1.date AND s1.id_punto = 3) AS flow_3,
    (SELECT s4.flow FROM muestras_rambla s4
      WHERE s4.date = t1.date AND s1.id_punto = 4) AS flow_4,
    (SELECT s6.flow FROM muestras_rambla s6
      WHERE s6.date = t1.date AND s1.id_punto = 6) AS flow_6
FROM
    muestras_rambla t1
GROUP BY
    t1.date;
This uses sub-queries in the Select clause to get the data in columns.
There may be another way.
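One such alternative is conditional aggregation with CASE, which also averages
the two muestras per date and id_punto. This is just a sketch, untested, and
written against the column names from the sample above (date, id_punto, flow)
rather than the real ones ('fecha', 'caudal'):

SELECT
    date,
    -- rows that don't match a CASE become NULL and are ignored by avg(),
    -- so each avg() only sees that id_punto's samples for the date
    avg(CASE WHEN id_punto = 1 THEN flow END) AS flow_1,
    avg(CASE WHEN id_punto = 2 THEN flow END) AS flow_2,
    avg(CASE WHEN id_punto = 3 THEN flow END) AS flow_3,
    avg(CASE WHEN id_punto = 4 THEN flow END) AS flow_4,
    avg(CASE WHEN id_punto = 6 THEN flow END) AS flow_6
FROM muestras_rambla
GROUP BY date
ORDER BY date;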

Nick Barr


Nov 12 '05 #2
Nick Barr wrote:
[previous messages quoted in full, snipped]
Ooops, having tested that query I found a typo (or four). Try:

SELECT
    date,
    (SELECT s1.flow FROM muestras_rambla s1
      WHERE s1.date = t1.date AND s1.id_punto = 1) AS flow_1,
    (SELECT s2.flow FROM muestras_rambla s2
      WHERE s2.date = t1.date AND s2.id_punto = 2) AS flow_2,
    (SELECT s3.flow FROM muestras_rambla s3
      WHERE s3.date = t1.date AND s3.id_punto = 3) AS flow_3,
    (SELECT s4.flow FROM muestras_rambla s4
      WHERE s4.date = t1.date AND s4.id_punto = 4) AS flow_4,
    (SELECT s6.flow FROM muestras_rambla s6
      WHERE s6.date = t1.date AND s6.id_punto = 6) AS flow_6
FROM
    muestras_rambla t1
GROUP BY
    t1.date;
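
One caveat with these scalar sub-queries: since each date/id_punto pair usually
has two muestras, a sub-query can return more than one row, which Postgres will
reject at run time, and the flow is not averaged. A possible variation (again
only a sketch, untested) wraps each sub-query in avg() to cover both points:

SELECT
    t1.date,
    (SELECT avg(s1.flow) FROM muestras_rambla s1
      WHERE s1.date = t1.date AND s1.id_punto = 1) AS flow_1,
    (SELECT avg(s2.flow) FROM muestras_rambla s2
      WHERE s2.date = t1.date AND s2.id_punto = 2) AS flow_2,
    (SELECT avg(s3.flow) FROM muestras_rambla s3
      WHERE s3.date = t1.date AND s3.id_punto = 3) AS flow_3,
    (SELECT avg(s4.flow) FROM muestras_rambla s4
      WHERE s4.date = t1.date AND s4.id_punto = 4) AS flow_4,
    (SELECT avg(s6.flow) FROM muestras_rambla s6
      WHERE s6.date = t1.date AND s6.id_punto = 6) AS flow_6
FROM muestras_rambla t1
GROUP BY t1.date;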

Nick Barr
Nov 12 '05 #3
[earlier messages quoted, snipped]

See the crosstab function in contrib/tablefunc. Here's what it looks like
given the above data:

select * from crosstab(
  'select thedate, id_punto, avg(flow) from muestras_rambla
   group by thedate, id_punto order by 1,2',
  'select distinct id_punto from muestras_rambla order by 1'
) as (thedate timestamp with time zone,
      c1 float8, c2 float8, c3 float8, c4 float8, c6 float8);

        thedate         |   c1    |   c2   |   c3   |   c4    |   c6
------------------------+---------+--------+--------+---------+--------
 2003-02-05 15:00:00-08 | 699.462 | 341.05 | 514.05 | 466.884 | 30.012
 2003-02-18 15:00:00-08 |  911.43 |        |        |         |
(2 rows)

This form of crosstab() requires 7.4RC1 or a 7.3 backpatched copy from here:
http://www.joeconway.com/
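
(For anyone reading this on a later release: these functions were later
packaged as the tablefunc extension, so the set-up should be roughly

  CREATE EXTENSION tablefunc;   -- contrib module that provides crosstab()

and the crosstab() call above should then work unchanged.)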

HTH,

Joe

Nov 12 '05 #4

This thread has been closed and replies have been disabled. Please start a new discussion.

