website doc search is extremely SLOW

Trying to use the 'search' in the docs section of PostgreSQL.org
is extremely SLOW. Considering this is a website for a database
and databases are supposed to be good for indexing content, I'd
expect a much faster performance.

I submitted my search over two minutes ago. I just finished this
email to the list. The results have still not come back. I only
searched for:

SECURITY INVOKER

Perhaps this should be worked on?

Dante

---------------------------(end of broadcast)---------------------------
TIP 9: the planner will ignore your desire to choose an index scan if your
joining column's datatypes do not match

Nov 12 '05
While you are in there - consider looking at effective_cache_size too.
Set it to something like your average buffer cache memory.

As I understand this, it only affects the planner's choice among possible plans - so
with the default (1000) some good ones that use more memory may be
ignored (mind you - some really bad ones may be ignored too).
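For concreteness, a minimal sketch of the kind of postgresql.conf entry being
suggested here (the value is purely illustrative and should be sized to the OS
buffer cache actually available on the box; the unit is 8KB pages):

effective_cache_size = 100000   # roughly 800MB of expected OS/kernel cache

A reload is enough for this setting to take effect; no restart is needed.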

best wishes

Mark

Tom Lane wrote:

is this something that can be set database wide,


Yeah, see default_statistics_target in postgresql.conf.
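A hedged sketch of both spellings (the numbers, and the ndict8.word_id column
borrowed from elsewhere in this thread, are only examples); a fresh ANALYZE is
needed before the planner sees the new statistics:

default_statistics_target = 100                              # postgresql.conf: database-wide default

ALTER TABLE ndict8 ALTER COLUMN word_id SET STATISTICS 100;  -- raise just one hot column
ANALYZE ndict8;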

---------------------------(end of broadcast)---------------------------
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to ma*******@postgresql.org)

Nov 12 '05 #51
"Marc G. Fournier" <sc*****@postgresql.org> writes:
On Thu, 1 Jan 2004, Tom Lane wrote:
"Marc G. Fournier" <sc*****@postgresql.org> writes:
what sort of impact does CLUSTER have on the system? For instance, an
index happens nightly, so I'm guessing that I'll have to CLUSTER each
right after?
Depends; what does the "index" process do --- are ndict8 and friends
rebuilt from scratch?

nope, but heavily updated ... basically, the indexer looks at url for what
urls need to be 're-indexed' ... if it does, it removes all words from the
ndict# tables that belong to that url, and re-adds accordingly ...


Hmm, but in practice only a small fraction of the pages on the site
change in any given day, no? I'd think the typical nightly run changes
only a small fraction of the entries in the tables, if it is smart
enough not to re-index pages that did not change.

My guess is that it'd be enough to re-cluster once a week or so.
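For illustration, the weekly re-cluster could be as small as this (the index and
table names are the ones that appear in the plans later in the thread; the first
form records the clustering index, later runs can reuse it):

CLUSTER n8_word ON ndict8;   -- initial cluster on the word_id index (newer releases: CLUSTER ndict8 USING n8_word)
CLUSTER ndict8;              -- subsequent runs: re-cluster on the remembered index
ANALYZE ndict8;              -- refresh statistics afterwards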

But this is pointless speculation until we find out whether clustering
helps enough to make it worth maintaining clustered-ness at all. Did
you get any results yet?

regards, tom lane

---------------------------(end of broadcast)---------------------------
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/docs/faqs/FAQ.html

Nov 12 '05 #52
On Thu, 1 Jan 2004, Tom Lane wrote:
"Marc G. Fournier" <sc*****@postgresql.org> writes:
On Thu, 1 Jan 2004, Tom Lane wrote:
"Marc G. Fournier" <sc*****@postgresql.org> writes:
what sort of impact does CLUSTER have on the system? For instance, an
index happens nightly, so I'm guessing that I'll have to CLUSTER each
right after?

Depends; what does the "index" process do --- are ndict8 and friends
rebuilt from scratch?
nope, but heavily updated ... basically, the indexer looks at url for what
urls need to be 're-indexed' ... if it does, it removes all words from the
ndict# tables that belong to that url, and re-adds accordingly ...


Hmm, but in practice only a small fraction of the pages on the site
change in any given day, no? I'd think the typical nightly run changes
only a small fraction of the entries in the tables, if it is smart
enough not to re-index pages that did not change.


that is correct, and I further restrict it to 10000 URLs a night ...
My guess is that it'd be enough to re-cluster once a week or so.

But this is pointless speculation until we find out whether clustering
helps enough to make it worth maintaining clustered-ness at all. Did
you get any results yet?


Its doing the CLUSTERing right now ... will post results once finished ...

----
Marc G. Fournier Hub.Org Networking Services (http://www.hub.org)
Email: sc*****@hub.org Yahoo!: yscrappy ICQ: 7615664

---------------------------(end of broadcast)---------------------------
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/docs/faqs/FAQ.html

Nov 12 '05 #53
Yup,

So slow in fact that I never use it. I did once or twice and gave up.
It is ironic! I only come to the online docs when I already know the
<where> part of my search and just go to that part or section. For
everything else, there's google!

SECURITY INVOKER site:postgresql.org

Searched pages from postgresql.org for SECURITY INVOKER. Results 1 -
10 of about 141. Search took 0.23 seconds.
Ahhh, that's better.

Or use site:www.postgresql.org to avoid the archive listings, etc.

== Ezra Epstein

""D. Dante Lorenso"" <da***@lorenso.com> wrote in message
news:3F**************@lorenso.com...
Trying to use the 'search' in the docs section of PostgreSQL.org
is extremely SLOW. Considering this is a website for a database
and databases are supposed to be good for indexing content, I'd
expect a much faster performance.

I submitted my search over two minutes ago. I just finished this
email to the list. The results have still not come back. I only
searched for:

SECURITY INVOKER

Perhaps this should be worked on?

Dante

---------------------------(end of broadcast)---------------------------
TIP 9: the planner will ignore your desire to choose an index scan if your
joining column's datatypes do not match

Nov 12 '05 #54
On Thu, 1 Jan 2004, Tom Lane wrote:
"Marc G. Fournier" <sc*****@postgresql.org> writes:
On Thu, 1 Jan 2004, Tom Lane wrote:
"Marc G. Fournier" <sc*****@postgresql.org> writes:
what sort of impact does CLUSTER have on the system? For instance, an
index happens nightly, so I'm guessing that I'll have to CLUSTER each
right after?

Depends; what does the "index" process do --- are ndict8 and friends
rebuilt from scratch?

nope, but heavily updated ... basically, the indexer looks at url for what
urls need to be 're-indexed' ... if it does, it removes all words from the
ndict# tables that belong to that url, and re-adds accordingly ...


Hmm, but in practice only a small fraction of the pages on the site
change in any given day, no? I'd think the typical nightly run changes
only a small fraction of the entries in the tables, if it is smart
enough not to re-index pages that did not change.

My guess is that it'd be enough to re-cluster once a week or so.

But this is pointless speculation until we find out whether clustering
helps enough to make it worth maintaining clustered-ness at all. Did
you get any results yet?


Here is post-CLUSTER:

QUERY PLAN
----------------------------------------------------------------------------------------------------------------------------------
Nested Loop (cost=0.00..19470.40 rows=1952 width=8) (actual time=39.639..4200.376 rows=13415 loops=1)
-> Index Scan using n8_word on ndict8 (cost=0.00..70.90 rows=3253 width=8) (actual time=37.047..2802.400 rows=15533 loops=1)
Index Cond: (word_id = 417851441)
-> Index Scan using url_rec_id on url (cost=0.00..5.95 rows=1 width=4) (actual time=0.061..0.068 rows=1 loops=15533)
Index Cond: (url.rec_id = "outer".url_id)
Filter: (url ~~ 'http://archives.postgresql.org/%%'::text)
Total runtime: 4273.799 ms
(7 rows)

And ... shit ... just tried a search on 'security invoker', and results
back in 2 secs ... 'multi version', 18 secs ... 'mnogosearch', .32sec ...
'mnogosearch performance', 18secs ...

this is closer to what I expect from PostgreSQL ...

I'm still loading the 'WITHOUT OIDS' database ... should I expect that,
with CLUSTERing, its performance would be slightly better yet, or would
the difference be negligible?

----
Marc G. Fournier Hub.Org Networking Services (http://www.hub.org)
Email: sc*****@hub.org Yahoo!: yscrappy ICQ: 7615664

---------------------------(end of broadcast)---------------------------
TIP 8: explain analyze is your friend

Nov 12 '05 #55
"Marc G. Fournier" <sc*****@postgresql.org> writes:
I'm still loading the 'WITHOUT OIDS' database ... should I expect that,
with CLUSTERing, its performance would be slightly better yet, or would
the difference be negligible?


I think the difference will be marginal, but worth doing; you're
reducing the row size from 40 bytes to 36 if I counted correctly,
so circa-10% I/O saving, no?

24 bytes minimum 7.4 HeapTupleHeader
4 bytes OID
12 bytes three int4 fields

On a machine with 8-byte MAXALIGN, this would not help, but on
Intel hardware it should.

regards, tom lane

---------------------------(end of broadcast)---------------------------
TIP 8: explain analyze is your friend

Nov 12 '05 #56
On Thu, 1 Jan 2004, Tom Lane wrote:
"Marc G. Fournier" <sc*****@postgresql.org> writes:
I'm still loading the 'WITHOUT OIDS' database ... should I expect that,
with CLUSTERing, its performance would be slightly better yet, or would
the difference be negligible?


I think the difference will be marginal, but worth doing; you're
reducing the row size from 40 bytes to 36 if I counted correctly,
so circa-10% I/O saving, no?

24 bytes minimum 7.4 HeapTupleHeader
4 bytes OID
12 bytes three int4 fields

On a machine with 8-byte MAXALIGN, this would not help, but on
Intel hardware it should.


I take it there is no way of drop'ng OIDs after the fact, eh? :)

----
Marc G. Fournier Hub.Org Networking Services (http://www.hub.org)
Email: sc*****@hub.org Yahoo!: yscrappy ICQ: 7615664

---------------------------(end of broadcast)---------------------------
TIP 7: don't forget to increase your free space map settings

Nov 12 '05 #57
"Marc G. Fournier" <sc*****@postgresql.org> writes:
I take it there is no way of drop'ng OIDs after the fact, eh? :)


I think we have an ALTER TABLE DROP OIDS command, but it won't instantly
remove the OIDS from the table --- removal happens incrementally as rows
get updated. Maybe that's good enough for your situation though.
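A hedged sketch, assuming the SET WITHOUT OIDS spelling of that command, using
one of the tables from this thread as the example:

ALTER TABLE ndict8 SET WITHOUT OIDS;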

regards, tom lane

---------------------------(end of broadcast)---------------------------
TIP 7: don't forget to increase your free space map settings

Nov 12 '05 #58
On Thu, 1 Jan 2004, Tom Lane wrote:
"Marc G. Fournier" <sc*****@postgresql.org> writes:
I take it there is no way of drop'ng OIDs after the fact, eh? :)


I think we have an ALTER TABLE DROP OIDS command, but it won't instantly
remove the OIDS from the table --- removal happens incrementally as rows
get updated. Maybe that's good enough for your situation though.


actually, that would be perfect ... saves having to spend the many many
hours to re-index all the URLs, and will at least give a gradual
improvement :)

----
Marc G. Fournier Hub.Org Networking Services (http://www.hub.org)
Email: sc*****@hub.org Yahoo!: yscrappy ICQ: 7615664

---------------------------(end of broadcast)---------------------------
TIP 4: Don't 'kill -9' the postmaster

Nov 12 '05 #59

Note that I haven't made many changes to the postgresql.conf file, so
there might be something really obvious I've overlooked, but here are the
uncommented ones (ie. ones I've modified from defaults):

tcpip_socket = true
max_connections = 512
shared_buffers = 10000 # min 16, at least max_connections*2, 8KB each
sort_mem = 10240 # min 64, size in KB
vacuum_mem = 81920 # min 1024, size in KB


what about effective_cache_size and random_page_cost?
Sincerely,

Joshua D. Drake

syslog = 2 # range 0-2; 0=stdout; 1=both; 2=syslog
syslog_facility = 'LOCAL0'
syslog_ident = 'postgres'
log_connections = true
log_duration = false
log_statement = false
lc_messages = 'C' # locale for system error message strings
lc_monetary = 'C' # locale for monetary formatting
lc_numeric = 'C' # locale for number formatting
lc_time = 'C' # locale for time formatting
----
Marc G. Fournier Hub.Org Networking Services (http://www.hub.org)
Email: sc*****@hub.org Yahoo!: yscrappy ICQ: 7615664

---------------------------(end of broadcast)---------------------------
TIP 6: Have you searched our list archives?

http://archives.postgresql.org

--
Command Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC
Postgresql support, programming shared hosting and dedicated hosting.
+1-503-222-2783 - jd@commandprompt.com - http://www.commandprompt.com
Editor-N-Chief - PostgreSQl.Org - http://www.postgresql.org

---------------------------(end of broadcast)---------------------------
TIP 4: Don't 'kill -9' the postmaster

Nov 12 '05 #60
>


A content management system is long overdue I think, do you have any
good recommendations?

Bricolage
--
Command Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC
Postgresql support, programming shared hosting and dedicated hosting.
+1-503-222-2783 - jd@commandprompt.com - http://www.commandprompt.com

Nov 12 '05 #61
On Sat, 3 Jan 2004, Dave Cramer wrote:
On Sat, 2004-01-03 at 09:49, Oleg Bartunov wrote:
Hi there,

I hoped to release pilot version of www.pgsql.ru with full text search
of postgresql related resources (currently we've crawled 27 sites, about
340K pages) but we started celebration NY too early :)
Expect it tomorrow or monday.

Fantastic!


I'm just working on web interface to give people possibility to choose
collection of documents to search, for example: 7.1 documentation, 7.4 documentation



I'm not sure if there is some kind of CMS on www.postgresql.org, but
if there is, the best way is to embed tsearch2 into the CMS. You'll have a
fast, incremental search engine. There are many users of tsearch2 and I think
embedding isn't a very difficult problem. I estimate there are at most
10-20K pages of documentation, nothing for tsearch2.


A content management system is long overdue I think, do you have any
good recommendations?


*.postgresql.org likes PHP, so let's see in Google for 'php cms' :)


Regards,
Oleg
__________________________________________________ ___________
Oleg Bartunov, sci.researcher, hostmaster of AstroNet,
Sternberg Astronomical Institute, Moscow University (Russia)
Internet: ol**@sai.msu.su, http://www.sai.msu.su/~megera/
phone: +007(095)939-16-83, +007(095)939-23-83

---------------------------(end of broadcast)---------------------------
TIP 8: explain analyze is your friend

Nov 12 '05 #62
On Thu, 1 Jan 2004, Marc G. Fournier wrote:
On Thu, 1 Jan 2004, Bruce Momjian wrote:
Marc G. Fournier wrote:
186_archives=# \d ndict7
Table "public.ndict7"
Column | Type | Modifiers
---------+---------+--------------------
url_id | integer | not null default 0
word_id | integer | not null default 0
intag | integer | not null default 0
Indexes:
"n7_url" btree (url_id)
"n7_word" btree (word_id)
The slowdown is the LIKE condition, as the ndict[78] word_id conditions
return near instantly when run individually, and when I run the 'url/LIKE'
condition, it takes "forever" ...
Does it help to CLUSTER url.url? Is your data being loaded in so
identical values used by LIKE are next to each other?
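A sketch of what Bruce is asking about (the index name is illustrative, and an
index on url.url has to exist before CLUSTER can use it):

CREATE INDEX url_url_idx ON url (url);
CLUSTER url_url_idx ON url;   -- newer releases: CLUSTER url USING url_url_idx
ANALYZE url;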


Just tried CLUSTER, and no difference, but ... chat'd with Dave on ICQ
this evening, and was thinking of something ... and it comes back to
something that I mentioned awhile back ...

Taking the ndict8 query that I originally presented, now post CLUSTER, and
an explain analyze looks like:

QUERY PLAN
---------------------------------------------------------------------------------------------------------------------------------------
Hash Join (cost=13918.23..26550.58 rows=17 width=8) (actual time=4053.403..83481.769 rows=13415 loops=1)
Hash Cond: ("outer".url_id = "inner".rec_id)
-> Index Scan using n8_word on ndict8 (cost=0.00..12616.09 rows=3219 width=8) (actual time=113.645..79163.431 rows=15533 loops=1)
Index Cond: (word_id = 417851441)
-> Hash (cost=13913.31..13913.31 rows=1968 width=4) (actual time=3920.597..3920.597 rows=0 loops=1)
-> Seq Scan on url (cost=0.00..13913.31 rows=1968 width=4) (actual time=3.837..2377.853 rows=304811 loops=1)
Filter: ((url || ''::text) ~~ 'http://archives.postgresql.org/%%'::text)
Total runtime: 83578.572 ms
(8 rows)

Now, if I knock off the LIKE, so that I'm returning all rows from ndict8,
join'd to all the URLs that contain them, you get:

QUERY PLAN
-----------------------------------------------------------------------------------------------------------------------------------
Nested Loop (cost=0.00..30183.13 rows=3219 width=8) (actual time=0.299..1217.116 rows=15533 loops=1)
-> Index Scan using n8_word on ndict8 (cost=0.00..12616.09 rows=3219 width=8) (actual time=0.144..458.891 rows=15533 loops=1)
Index Cond: (word_id = 417851441)
-> Index Scan using url_rec_id on url (cost=0.00..5.44 rows=1 width=4) (actual time=0.024..0.029 rows=1 loops=15533)
Index Cond: (url.rec_id = "outer".url_id)
Total runtime: 1286.647 ms
(6 rows)

So, there are 15333 URLs that contain that word ... now, what I want to
find out is how many of those 15333 URLs contain
'http://archives.postgresql.org/%%', which is 13415 ...


what's the need for such query ? Are you trying to restrict search to
archives ? Why not just have site attribute for document and use simple
join ?

The problem is that right now, we look at the LIKE first, giving us ~300k
rows, and then search through those for those who have the word matching
... is there some way of reducing the priority of the LIKE part of the
query, as far as the planner is concerned, so that it will "resolve" the =
first, and then work the LIKE on the resultant set, instead of the other
way around? So that the query is only checking 15k records for the 13k
that match, instead of searching through 300k?

I'm guessing that the reason the LIKE is taking precedence is
because the url table has fewer rows in it than ndict8?
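One blunt way to test whether the nested-loop plan (the fast one shown earlier)
wins once the hash join is taken out of consideration is to disable hash joins
for a single transaction. This is a diagnostic sketch only, with the query text
reconstructed from the plans above rather than the exact mnogosearch query:

BEGIN;
SET LOCAL enable_hashjoin = off;
EXPLAIN ANALYZE
SELECT n.url_id, n.intag
FROM ndict8 n, url u
WHERE n.word_id = 417851441
  AND u.rec_id = n.url_id
  AND u.url LIKE 'http://archives.postgresql.org/%%';
COMMIT;

If that produces the fast plan, the longer-term fix is better statistics on
word_id (the default_statistics_target discussion earlier in the thread).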

----
Marc G. Fournier Hub.Org Networking Services (http://www.hub.org)
Email: sc*****@hub.org Yahoo!: yscrappy ICQ: 7615664

---------------------------(end of broadcast)---------------------------
TIP 8: explain analyze is your friend


Regards,
Oleg
__________________________________________________ ___________
Oleg Bartunov, sci.researcher, hostmaster of AstroNet,
Sternberg Astronomical Institute, Moscow University (Russia)
Internet: ol**@sai.msu.su, http://www.sai.msu.su/~megera/
phone: +007(095)939-16-83, +007(095)939-23-83

---------------------------(end of broadcast)---------------------------
TIP 6: Have you searched our list archives?

http://archives.postgresql.org

Nov 12 '05 #63
Hi there,

I hoped to release pilot version of www.pgsql.ru with full text search
of postgresql related resources (currently we've crawled 27 sites, about
340K pages) but we started celebration NY too early :)
Expect it tomorrow or monday.

We have developed many search engines, some of them are based on
PostgreSQL like tsearch2, OpenFTS and are best to be embedded into
CMS for true online updating. Their power comes from access to documents attributes
stored in database, so one could perform categorized search, restricted
search (different rights, different document status, etc). The most close
example would be search on archive of mailing lists, which should be
embed such kind of full text search engine. fts.postgresql.org in his best
time was one of implementation of such system. This is what I hope to have on
www.pgsql.ru, if Marc will give us access to mailing list archives :)

The other search engines we use are based on the standard technology of
inverted indices; they are best suited for indexing semi-static collections
of documents. We have a full-fledged crawler, indexer and searcher. Online
update of inverted indices is a rather complex technological task, and I'm
not sure there are databases which have true online updates. On www.pgsql.ru
we use GTSearch, which is a generic text search engine we developed for
vertical searches (for example, postgresql related resources). It has a
common set of features like phrase search, proximity ranking, site search,
morphology, stemming support, cached documents, spell checking, similar search,
etc.

I see several separate tasks:

* official documents (documentation mostly)

I'm not sure if there is some kind of CMS on www.postgresql.org, but
if there is, the best way is to embed tsearch2 into the CMS. You'll have a
fast, incremental search engine. There are many users of tsearch2 and I think
embedding isn't a very difficult problem. I estimate there are at most
10-20K pages of documentation, nothing for tsearch2.

* mailing lists archive

The mailing lists archive is constantly growing and
also requires incremental updates, so tsearch2 is also needed here. Nice hardware
like Marc has described would be more than enough. We have a moderate dual
PIII 1GHz server and I hope it would be enough.

* postgresql related resources

I think this task should be solved using the standard technique - crawler,
indexer, searcher. Due to the limited number of sites it's possible to
keep the indices more up to date than the major search engines do, for example by
crawling once a week. This is what we currently have on pgsql.ru because
it doesn't require any permissions or interaction with site officials.
Regards,
Oleg
On Wed, 31 Dec 2003, Marc G. Fournier wrote:
On Tue, 30 Dec 2003, Joshua D. Drake wrote:
Hello,

Why are we not using Tsearch2?


Because nobody has built it yet? Oleg's stuff is nice, but we want
something that we can build into the existing web sites, not a standalone
site ...

I keep searching the web hoping someone has come up with a 'tsearch2'
based search engine that does the spidering, but, unless its sitting right
in front of my eyes and I'm not seeing it, I haven't found it yet :(

Out of everything I've found so far, mnogosearch is one of the best ... I
just wish I could figure out where the bottleneck for it was, since, from
reading their docs, their method of storing the data doesn't appear to be
particularly off. I'm tempted to try their caching storage manager, and
getting away from SQL totally, but I *really* want to showcase PostgreSQL
on this :(

----
Marc G. Fournier Hub.Org Networking Services (http://www.hub.org)
Email: sc*****@hub.org Yahoo!: yscrappy ICQ: 7615664

---------------------------(end of broadcast)---------------------------
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to ma*******@postgresql.org)


Regards,
Oleg
__________________________________________________ ___________
Oleg Bartunov, sci.researcher, hostmaster of AstroNet,
Sternberg Astronomical Institute, Moscow University (Russia)
Internet: ol**@sai.msu.su, http://www.sai.msu.su/~megera/
phone: +007(095)939-16-83, +007(095)939-23-83

---------------------------(end of broadcast)---------------------------
TIP 6: Have you searched our list archives?

http://archives.postgresql.org

Nov 12 '05 #64
On Sat, 2004-01-03 at 09:49, Oleg Bartunov wrote:
Hi there,

I hoped to release pilot version of www.pgsql.ru with full text search
of postgresql related resources (currently we've crawled 27 sites, about
340K pages) but we started celebration NY too early :)
Expect it tomorrow or monday. Fantastic!
We have developed many search engines, some of them are based on
PostgreSQL like tsearch2, OpenFTS and are best to be embedded into
CMS for true online updating. Their power comes from access to documents attributes
stored in database, so one could perform categorized search, restricted
search (different rights, different document status, etc). The most close
example would be search on archive of mailing lists, which should be
embed such kind of full text search engine. fts.postgresql.org in his best
time was one of implementation of such system. This is what I hope to have on
www.pgsql.ru, if Marc will give us access to mailing list archives :)
I too would like access to the archives.

The other search engines we use are based on the standard technology of
inverted indices; they are best suited for indexing semi-static collections
of documents. We have a full-fledged crawler, indexer and searcher. Online
update of inverted indices is a rather complex technological task, and I'm
not sure there are databases which have true online updates. On www.pgsql.ru
we use GTSearch, which is a generic text search engine we developed for
vertical searches (for example, postgresql related resources). It has a
common set of features like phrase search, proximity ranking, site search,
morphology, stemming support, cached documents, spell checking, similar search,
etc.

I see several separate tasks:

* official documents (documentation mostly)

I'm not sure if there is some kind of CMS on www.postgresql.org, but
if there is, the best way is to embed tsearch2 into the CMS. You'll have a
fast, incremental search engine. There are many users of tsearch2 and I think
embedding isn't a very difficult problem. I estimate there are at most
10-20K pages of documentation, nothing for tsearch2.
A content management system is long overdue I think, do you have any
good recommendations?

* mailing lists archive

The mailing lists archive is constantly growing and
also requires incremental updates, so tsearch2 is also needed here. Nice hardware
like Marc has described would be more than enough. We have a moderate dual
PIII 1GHz server and I hope it would be enough.

* postgresql related resources

I think this task should be solved using the standard technique - crawler,
indexer, searcher. Due to the limited number of sites it's possible to
keep the indices more up to date than the major search engines do, for example by
crawling once a week. This is what we currently have on pgsql.ru because
it doesn't require any permissions or interaction with site officials.
Regards,
Oleg
On Wed, 31 Dec 2003, Marc G. Fournier wrote:
On Tue, 30 Dec 2003, Joshua D. Drake wrote:
Hello,

Why are we not using Tsearch2?


Because nobody has built it yet? Oleg's stuff is nice, but we want
something that we can build into the existing web sites, not a standalone
site ...

I keep searching the web hoping someone has come up with a 'tsearch2'
based search engine that does the spidering, but, unless its sitting right
in front of my eyes and I'm not seeing it, I haven't found it yet :(

Out of everything I've found so far, mnogosearch is one of the best ... I
just wish I could figure out where the bottleneck for it was, since, from
reading their docs, their method of storing the data doesn't appear to be
particularly off. I'm tempted to try their caching storage manager, and
getting away from SQL totally, but I *really* want to showcase PostgreSQL
on this :(

----
Marc G. Fournier Hub.Org Networking Services (http://www.hub.org)
Email: sc*****@hub.org Yahoo!: yscrappy ICQ: 7615664

---------------------------(end of broadcast)---------------------------
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to ma*******@postgresql.org)


Regards,
Oleg
__________________________________________________ ___________
Oleg Bartunov, sci.researcher, hostmaster of AstroNet,
Sternberg Astronomical Institute, Moscow University (Russia)
Internet: ol**@sai.msu.su, http://www.sai.msu.su/~megera/
phone: +007(095)939-16-83, +007(095)939-23-83

--
Dave Cramer
519 939 0336
ICQ # 1467551
---------------------------(end of broadcast)---------------------------
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to ma*******@postgresql.org)

Nov 12 '05 #65
>


It's a good Mason-driven CMS, but Marc seems to be a PHP fan :)

Well I know that the advocacy site is looking at implementing Bricolage.
It seems that if we
were smart about it, we would pick a platform (application wise) and
stick with it.

Sincerely,

Joshua D. Drake



Regards,
Oleg
_________________________________________________ ____________
Oleg Bartunov, sci.researcher, hostmaster of AstroNet,
Sternberg Astronomical Institute, Moscow University (Russia)
Internet: ol**@sai.msu.su, http://www.sai.msu.su/~megera/
phone: +007(095)939-16-83, +007(095)939-23-83

--
Command Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC
Postgresql support, programming shared hosting and dedicated hosting.
+1-503-222-2783 - jd@commandprompt.com - http://www.commandprompt.com
Editor-N-Chief - PostgreSQl.Org - http://www.postgresql.org
Nov 12 '05 #66
On Sat, 3 Jan 2004, Joshua D. Drake wrote:

A content management system is long overdue I think, do you have any
good recommendations?
Bricolage


It's a good Mason-driven CMS, but Marc seems to be a PHP fan :)


Regards,
Oleg
__________________________________________________ ___________
Oleg Bartunov, sci.researcher, hostmaster of AstroNet,
Sternberg Astronomical Institute, Moscow University (Russia)
Internet: ol**@sai.msu.su, http://www.sai.msu.su/~megera/
phone: +007(095)939-16-83, +007(095)939-23-83

---------------------------(end of broadcast)---------------------------
TIP 7: don't forget to increase your free space map settings

Nov 12 '05 #67
On Sat, 3 Jan 2004, Oleg Bartunov wrote:
So, there are 15333 URLs that contain that word ... now, what I want to
find out is how many of those 15333 URLs contain
'http://archives.postgresql.org/%%', which is 13415 ...


what's the need for such query ? Are you trying to restrict search to
archives ? Why not just have site attribute for document and use simple
join ?


The searches are designed so that you can do sub-section searches ... ie.
if you only wanted to search hackers, the LIKE would be:

'http://archives.postgresql.org/pgsql-hackers/%%'

while:

'http://archives.postgresql.org/%%'

would give you a search of *all* the mailing lists ...

In theory, you could go smaller and search on:

'http://archives.postgresql.org/pgsql-hackers/2003-11/%% for all messages
in November of 2003 ...
----
Marc G. Fournier Hub.Org Networking Services (http://www.hub.org)
Email: sc*****@hub.org Yahoo!: yscrappy ICQ: 7615664

---------------------------(end of broadcast)---------------------------
TIP 8: explain analyze is your friend

Nov 12 '05 #68
On Sat, 3 Jan 2004, Oleg Bartunov wrote:
time was one of implementation of such system. This is what I hope to have on
www.pgsql.ru, if Marc will give us access to mailing list archives :)


Access to the archives was provided before New Years *puzzled look* I sent
Teodor the rsync command that he needs to run to download it all from the
IP he provided me previously ...
----
Marc G. Fournier Hub.Org Networking Services (http://www.hub.org)
Email: sc*****@hub.org Yahoo!: yscrappy ICQ: 7615664

---------------------------(end of broadcast)---------------------------
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/docs/faqs/FAQ.html

Nov 12 '05 #69
On Sat, 3 Jan 2004, Marc G. Fournier wrote:
On Sat, 3 Jan 2004, Oleg Bartunov wrote:
time was one of implementation of such system. This is what I hope to have on
www.pgsql.ru, if Marc will give us access to mailing list archives :)
Access to the archives was provided before New Years *puzzled look* I sent
Teodor the rsync command that he needs to run to download it all from the
IP he provided me previously ...


Hmm, what's the secret rsync command you didn't share with me :)


----
Marc G. Fournier Hub.Org Networking Services (http://www.hub.org)
Email: sc*****@hub.org Yahoo!: yscrappy ICQ: 7615664

---------------------------(end of broadcast)---------------------------
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/docs/faqs/FAQ.html


Regards,
Oleg
__________________________________________________ ___________
Oleg Bartunov, sci.researcher, hostmaster of AstroNet,
Sternberg Astronomical Institute, Moscow University (Russia)
Internet: ol**@sai.msu.su, http://www.sai.msu.su/~megera/
phone: +007(095)939-16-83, +007(095)939-23-83

---------------------------(end of broadcast)---------------------------
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to ma*******@postgresql.org so that your
message can get through to the mailing list cleanly

Nov 12 '05 #70

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

I keep searching the web hoping someone has come up with a 'tsearch2'
based search engine that does the spidering, but, unless its sitting right
in front of my eyes and I'm not seeing it, I haven't found it yet :(


I wrote my own search engine for the docs back when the site was having
problems last year, and myself and some others needed a searchable
interface. It actually spidered the raw sgml pages themselves, and was
fairly quick. I can resurrect this if anyone is interested. It runs
with Perl and PostgreSQL and nothing else. :) Of course, it could probably
be modified to feed its sgml parsing output to tsearch as well.

In the meantime, could we please switch to a simple google search? It
would require changing one or two lines of HTML source, and at least
there would be *something* until we get everything sorted out.

- --
Greg Sabino Mullane gr**@turnstep.com
PGP Key: 0x14964AC8 200401041439

-----BEGIN PGP SIGNATURE-----

iD8DBQE/+GwIvJuQZxSWSsgRAkOgAJ9lmXwd/h/d+HzPiaPUVvO/Gq1O9wCeNbmn
CidSrTYP0sc5pp/hdlIS19o=
=YPWw
-----END PGP SIGNATURE-----

---------------------------(end of broadcast)---------------------------
TIP 6: Have you searched our list archives?

http://archives.postgresql.org

Nov 12 '05 #71
On Sat, 3 Jan 2004, Oleg Bartunov wrote:
On Sat, 3 Jan 2004, Joshua D. Drake wrote:

A content management system is long overdue I think, do you have any
good recommendations?

Bricolage


It's a good Mason-driven CMS, but Marc seems to be a PHP fan :)


Ummm, where do you derive that from? As the -www team all know (or
should), if they need something installed, they ask for it ... we have OACS,
Jakarta-Tomcat, mod_perl, mod_python, etc. now as it is. I personally use
PHP, but I don't use CMSs, so I don't even know of a PHP-based CMS to think
to recommend ...

Personally, I use what fits the situation ... web based stuff, I generally
do in PHP ... command line, all in perl *shrug*

----
Marc G. Fournier Hub.Org Networking Services (http://www.hub.org)
Email: sc*****@hub.org Yahoo!: yscrappy ICQ: 7615664

---------------------------(end of broadcast)---------------------------
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to ma*******@postgresql.org so that your
message can get through to the mailing list cleanly

Nov 12 '05 #72
On Sun, 4 Jan 2004, Oleg Bartunov wrote:
On Sat, 3 Jan 2004, Marc G. Fournier wrote:
On Sat, 3 Jan 2004, Oleg Bartunov wrote:
time was one of implementation of such system. This is what I hope to have on
www.pgsql.ru, if Marc will give us access to mailing list archives :)


Access to the archives was provided before New Years *puzzled look* I sent
Teodor the rsync command that he needs to run to download it all from the
IP he provided me previously ...


Hmm, what's the secret rsync command you didn't share with me :)


Its no secret, else I wouldn't have sent it to Teodor :)

----
Marc G. Fournier Hub.Org Networking Services (http://www.hub.org)
Email: sc*****@hub.org Yahoo!: yscrappy ICQ: 7615664

---------------------------(end of broadcast)---------------------------
TIP 6: Have you searched our list archives?

http://archives.postgresql.org

Nov 12 '05 #73
Send it to me too please?

Dave
On Sun, 2004-01-04 at 04:34, Oleg Bartunov wrote:
On Sat, 3 Jan 2004, Marc G. Fournier wrote:
On Sat, 3 Jan 2004, Oleg Bartunov wrote:
time was one of implementation of such system. This is what I hope to have on
www.pgsql.ru, if Marc will give us access to mailing list archives :)


Access to the archives was provided before New Years *puzzled look* I sent
Teodor the rsync command that he needs to run to download it all from the
IP he provided me previously ...


Hmm, what's the secret rsync command you didn't share with me :)


----
Marc G. Fournier Hub.Org Networking Services (http://www.hub.org)
Email: sc*****@hub.org Yahoo!: yscrappy ICQ: 7615664

---------------------------(end of broadcast)---------------------------
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/docs/faqs/FAQ.html


Regards,
Oleg
__________________________________________________ ___________
Oleg Bartunov, sci.researcher, hostmaster of AstroNet,
Sternberg Astronomical Institute, Moscow University (Russia)
Internet: ol**@sai.msu.su, http://www.sai.msu.su/~megera/
phone: +007(095)939-16-83, +007(095)939-23-83

---------------------------(end of broadcast)---------------------------
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to ma*******@postgresql.org so that your
message can get through to the mailing list cleanly

--
Dave Cramer
519 939 0336
ICQ # 1467551
---------------------------(end of broadcast)---------------------------
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/docs/faqs/FAQ.html

Nov 12 '05 #74
Marc,

Can you please send it again, I can't seem to find the original message?

Dave
On Sun, 2004-01-04 at 10:03, Marc G. Fournier wrote:
On Sun, 4 Jan 2004, Oleg Bartunov wrote:
On Sat, 3 Jan 2004, Marc G. Fournier wrote:
On Sat, 3 Jan 2004, Oleg Bartunov wrote:

> time was one of implementation of such system. This is what I hope to have on
> www.pgsql.ru, if Marc will give us access to mailing list archives :)

Access to the archives was provided before New Years *puzzled look* I sent
Teodor the rsync command that he needs to run to download it all from the
IP he provided me previously ...


Hmm, what's the secret rsync command you didn't share with me :)


Its no secret, else I wouldn't have sent it to Teodor :)

----
Marc G. Fournier Hub.Org Networking Services (http://www.hub.org)
Email: sc*****@hub.org Yahoo!: yscrappy ICQ: 7615664

--
Dave Cramer
519 939 0336
ICQ # 1467551
---------------------------(end of broadcast)---------------------------
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/docs/faqs/FAQ.html

Nov 12 '05 #75

Check it out now and let me know what you searched on, and whether or not
you see an improvement over what it was ... Tom -and- Bruce suggested some
changes that, from my tests, show a dramatic change in speed *compared to
what it was* ... but that could just be that I'm getting lucky ...

On Thu, 1 Jan 2004, ezra epstein wrote:
Yup,

So slow in fact that I never use it. I did once or twice and gave up.
It is ironic! I only come to the online docs when I already know the
<where> part of my search and just go to that part or section. For
everything else, there's google!

SECURITY INVOKER site:postgresql.org

Searched pages from postgresql.org for SECURITY INVOKER. Results 1 -
10 of about 141. Search took 0.23 seconds.
Ahhh, that's better.

Or use site:www.postgresql.org to avoid the archive listings, etc.

== Ezra Epstein

""D. Dante Lorenso"" <da***@lorenso.com> wrote in message
news:3F**************@lorenso.com...
Trying to use the 'search' in the docs section of PostgreSQL.org
is extremely SLOW. Considering this is a website for a database
and databases are supposed to be good for indexing content, I'd
expect a much faster performance.

I submitted my search over two minutes ago. I just finished this
email to the list. The results have still not come back. I only
searched for:

SECURITY INVOKER

Perhaps this should be worked on?

Dante

---------------------------(end of broadcast)---------------------------
TIP 9: the planner will ignore your desire to choose an index scan if your
joining column's datatypes do not match


---------------------------(end of broadcast)---------------------------
TIP 7: don't forget to increase your free space map settings


----
Marc G. Fournier Hub.Org Networking Services (http://www.hub.org)
Email: sc*****@hub.org Yahoo!: yscrappy ICQ: 7615664

---------------------------(end of broadcast)---------------------------
TIP 1: subscribe and unsubscribe commands go to ma*******@postgresql.org

Nov 12 '05 #76
On Sun, 4 Jan 2004, Greg Sabino Mullane wrote:
------------------------------------------------------------------------------
/usr/local/libexec/ppf_verify: pgp command failed

gpg: WARNING: using insecure memory!
gpg: please see http://www.gnupg.org/faq.html for more information
gpg: Signature made Sun Jan 4 15:39:52 2004 AST using DSA key ID 14964AC8
gpg: Can't check signature: public key not found
------------------------------------------------------------------------------
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

I keep searching the web hoping someone has come up with a 'tsearch2'
based search engine that does the spidering, but, unless its sitting right
in front of my eyes and I'm not seeing it, I haven't found it yet :(


I wrote my own search engine for the docs back when the site was having
problems last year, and myself and some others needed a searchable
interface. It actually spidered the raw sgml pages themselves, and was
fairly quick. I can resurrect this if anyone is interested. It runs
with Perl and PostgreSQL and nothing else. :) Of course, it could probably
be modified to feed its sgml parsing output to tsearch as well.

In the meantime, could we please switch to a simple google search? It
would require changing one or two lines of HTML source, and at least
there would be *something* until we get everything sorted out.


Have you checked things out since Tom -and- Bruce's suggestions?

----
Marc G. Fournier Hub.Org Networking Services (http://www.hub.org)
Email: sc*****@hub.org Yahoo!: yscrappy ICQ: 7615664

---------------------------(end of broadcast)---------------------------
TIP 1: subscribe and unsubscribe commands go to ma*******@postgresql.org

Nov 12 '05 #77

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

Have you checked things out since Tom -and- Bruce's suggestions?


Yes, and while they are better, they still reflect badly on PostgreSQL:

Searching for "partial index" in the docs only: 12.26 seconds

Searching from that result page (which defaults to the whole site)
for the word "index": 16.32 seconds

By comparison, the mysql site (which claims to be running Mnogosearch)
returns instantly, no matter the query. I like the idea of separating
the archives into their own search, which I agree with Marc should
help quite a bit.

- --
Greg Sabino Mullane gr**@turnstep.com
PGP Key: 0x14964AC8 200401041533

-----BEGIN PGP SIGNATURE-----

iD8DBQE/+HmHvJuQZxSWSsgRAmV0AJsHQ1lisSj4ur1WWyylYUjUWONU/QCgzk5p
UeN+njNAiWFA/u2+AajuC4k=
=qWcr
-----END PGP SIGNATURE-----

---------------------------(end of broadcast)---------------------------
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to ma*******@postgresql.org)

Nov 12 '05 #78
I searched only the 7.4 docs for create table, and it returned in 10
seconds, much better

Dave
On Sun, 2004-01-04 at 15:02, Marc G. Fournier wrote:
Check it out now and let me know what you searched on, and whether or not
you see an improvement over what it was ... Tom -and- Bruce suggested some
changes that, from my tests, show a dramatic change in speed *compared to
what it was* ... but that could just be that I'm getting lucky ...

On Thu, 1 Jan 2004, ezra epstein wrote:
Yup,

So slow in fact that I never use it. I did once or twice and gave up.
It is ironic! I only come to the online docs when I already know the
<where> part of my search and just go to that part or section. For
everything else, there's google!

SECURITY INVOKER site:postgresql.org

Searched pages from postgresql.org for SECURITY INVOKER. Results 1 -
10 of about 141. Search took 0.23 seconds.
Ahhh, that's better.

Or use site:www.postgresql.org to avoid the archive listings, etc.

== Ezra Epstein

""D. Dante Lorenso"" <da***@lorenso.com> wrote in message
news:3F**************@lorenso.com...
Trying to use the 'search' in the docs section of PostgreSQL.org
is extremely SLOW. Considering this is a website for a database
and databases are supposed to be good for indexing content, I'd
expect a much faster performance.

I submitted my search over two minutes ago. I just finished this
email to the list. The results have still not come back. I only
searched for:

SECURITY INVOKER

Perhaps this should be worked on?

Dante

---------------------------(end of broadcast)---------------------------
TIP 9: the planner will ignore your desire to choose an index scan if your
joining column's datatypes do not match


---------------------------(end of broadcast)---------------------------
TIP 7: don't forget to increase your free space map settings


----
Marc G. Fournier Hub.Org Networking Services (http://www.hub.org)
Email: sc*****@hub.org Yahoo!: yscrappy ICQ: 7615664

---------------------------(end of broadcast)---------------------------
TIP 1: subscribe and unsubscribe commands go to ma*******@postgresql.org

--
Dave Cramer
519 939 0336
ICQ # 1467551
---------------------------(end of broadcast)---------------------------
TIP 8: explain analyze is your friend

Nov 12 '05 #79
I just tried :

i) "wal" in "7.4 docs" -> 5 seconds
ii) "partial index" in "All Sites" -> 3 seconds
iii) "column statistics" in "All Sites" -> 2 seconds
iv) "fork exec" in "All Sites" -> 2 seconds

These results seem pretty good to me...

regards

Mark



---------------------------(end of broadcast)---------------------------
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/docs/faqs/FAQ.html

Nov 12 '05 #80
The searches are designed so that you can do sub-section searches ... ie.
if you only wanted to search hackers, the LIKE would be:

'http://archives.postgresql.org/pgsql-hackers/%%'

while:

'http://archives.postgresql.org/%%'

would give you a search of *all* the mailing lists ...

In theory, you could go smaller and search on:

'http://archives.postgresql.org/pgsql-hackers/2003-11/%% for all messages
in November of 2003 ...


That doesn't stop you from using an extra site attribute to speed up the
typical cases of documentation searches. You could just have an int
field or something (basically enumerating the most important sites) and
have the default set to be the result of a function. Then just make a
functional index, and maybe even cluster by that attribute.

The function I am describing would have the basic form:

CREATE FUNCTION site_type(url text) RETURNS int AS '
BEGIN
    IF url LIKE ''archives.postgresql.org%'' THEN RETURN 1;
    ELSIF url LIKE ''www.postgresql.org/docs/%'' THEN RETURN 2;
    -- .... (remaining site patterns go here)
    END IF;
    RETURN 0;   -- everything else
END;
' LANGUAGE plpgsql IMMUTABLE;

That way you shouldn't have to change the code that inserts into the
table, only the code that does the search.
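Continuing the sketch, the functional index and clustering Jeff mentions might
look like this (the index name is invented for illustration; the function has
to be IMMUTABLE, as above, for the index to be allowed):

CREATE INDEX url_site_type_idx ON url (site_type(url));
CLUSTER url_site_type_idx ON url;   -- newer releases: CLUSTER url USING url_site_type_idx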

If the table was clustered at the time of each change to the
documentation, I couldn't imagine that the documentation searches would
even take a second.

Also, even though it's kind of a performance optimization hack, it
doesn't seem unreasonable to store the smaller document sets in a
separate table, and then have a view of the union of the two tables.
That might help the server cache the right files for quick access.
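A minimal sketch of that split, with the two table names invented purely for
illustration (the columns are the ones shown in \d ndict7 earlier in the thread):

CREATE VIEW ndict8_all AS
    SELECT url_id, word_id, intag FROM ndict8_docs       -- small, hot document set
    UNION ALL
    SELECT url_id, word_id, intag FROM ndict8_archives;  -- large archive set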

Neither of these ideas would seem to have much impact on the flexibility
of the system you designed. Both are just some optimization things that
would give a good impression to the people doing quick documentation
searches.

Regards,
Jeff

---------------------------(end of broadcast)---------------------------
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to ma*******@postgresql.org so that your
message can get through to the mailing list cleanly

Nov 12 '05 #81
Mark Kirkwood <ma****@paradise.net.nz> writes:
These results seem pretty good to me...


FWIW, I see pretty decent search speed when I go to
http://www.postgresql.org/search.cgi
but pretty lousy search speed when I try a similar query at
http://archives.postgresql.org/

Could we get the latter search engine cranked up?

regards, tom lane

---------------------------(end of broadcast)---------------------------
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/docs/faqs/FAQ.html

Nov 12 '05 #82
Try www.pgsql.ru. I just released pilot version with full text searching
postgresql related resources. Search for security invoker takes 0.03 sec :)

Oleg
On Thu, 1 Jan 2004, ezra epstein wrote:
Yup,

So slow in fact that I never use it. I did once or twice and gave up.
It is ironic! I only come to the online docs when I already know the
<where> part of my search and just go to that part or section. For
everything else, there's google!

SECURITY INVOKER site:postgresql.org

Searched pages from postgresql.org for SECURITY INVOKER. Results 1 -
10 of about 141. Search took 0.23 seconds.
Ahhh, that's better.

Or use site:www.postgresql.org to avoid the archive listings, etc.

== Ezra Epstein

""D. Dante Lorenso"" <da***@lorenso.com> wrote in message
news:3F**************@lorenso.com...
Trying to use the 'search' in the docs section of PostgreSQL.org
is extremely SLOW. Considering this is a website for a database
and databases are supposed to be good for indexing content, I'd
expect a much faster performance.

I submitted my search over two minutes ago. I just finished this
email to the list. The results have still not come back. I only
searched for:

SECURITY INVOKER

Perhaps this should be worked on?

Dante

---------------------------(end of broadcast)---------------------------
TIP 9: the planner will ignore your desire to choose an index scan if your
joining column's datatypes do not match


---------------------------(end of broadcast)---------------------------
TIP 7: don't forget to increase your free space map settings


Regards,
Oleg
__________________________________________________ ___________
Oleg Bartunov, sci.researcher, hostmaster of AstroNet,
Sternberg Astronomical Institute, Moscow University (Russia)
Internet: ol**@sai.msu.su, http://www.sai.msu.su/~megera/
phone: +007(095)939-16-83, +007(095)939-23-83

---------------------------(end of broadcast)---------------------------
TIP 4: Don't 'kill -9' the postmaster

Nov 12 '05 #83
On Mon, 5 Jan 2004, Tom Lane wrote:
Mark Kirkwood <ma****@paradise.net.nz> writes:
These results seem pretty good to me...


FWIW, I see pretty decent search speed when I go to
http://www.postgresql.org/search.cgi
but pretty lousy search speed when I try a similar query at
http://archives.postgresql.org/

Could we get the latter search engine cranked up?


Odd ... I'm doing all my searches using archives ... note that both use
the same backend database, so the only thing I can think of is that when
you use search.cgi, it's not including everything on http://archives.*,
which will cut down the # of URLs found by ~300k out of 390k ... smaller
result set ...

----
Marc G. Fournier Hub.Org Networking Services (http://www.hub.org)
Email: sc*****@hub.org Yahoo!: yscrappy ICQ: 7615664

---------------------------(end of broadcast)---------------------------
TIP 8: explain analyze is your friend

Nov 12 '05 #84
