BIG successes of Lisp (was ...)

In the context of LaTeX, some Pythonista asked what the big
successes of Lisp were. I think there were at least three *big*
successes.

a. orbitz.com web site uses Lisp for algorithms, etc.
b. Yahoo store was originally written in Lisp.
c. Emacs

The issues with these will probably come up, so I might as well
mention them myself (which will also make this a more balanced
post)

a. AFAIK Orbitz frequently has to be shut down for maintenance
(read "full garbage collection" - I'm just guessing: with
generational garbage collection, you still have to do full
garbage collection once in a while, and on a system like that
it can take a while)

b. AFAIK, Yahoo Store was eventually rewritten in a non-Lisp.
Why? I'd tell you, but then I'd have to kill you :)

c. Emacs has a reputation for being slow and bloated. But then
it's not written in Common Lisp.
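The full-GC guess in (a) is easy to illustrate with any generational collector. Here is a small sketch using Python's gc module (Python just stands in for whatever runtime Orbitz actually used, which the post only guesses at):

```python
import gc

# Python's collector is generational: new objects start in generation 0
# and survivors migrate to the two older generations.
counts = gc.get_count()          # pending allocations per generation
print(counts)

# A minor collection sweeps only the youngest generation -- cheap.
gc.collect(0)

# A full collection must examine every generation; on a long-lived
# system with a huge old generation, this is the pause that hurts.
freed = gc.collect()             # all generations
print(freed)
```

The point is only that "generational" reduces how often you pay for a full sweep, not whether you ever pay for it.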

Are ViaWeb and Orbitz bigger successes than LaTeX? Do they
have more users? It depends. Does viewing a PDF file made
with LaTeX make you a user of LaTeX? Does visiting a Yahoo
Store make you a user of ViaWeb?

For the sake of being balanced: there were also some *big*
failures, such as Lisp Machines. They failed because
they could not compete with UNIX (SUN, SGI) in a time when
performance, multi-userism and uptime were of prime importance.
(Older LispM's just leaked memory until they were shut down,
newer versions overcame that problem but others remained)

Another big failure that is often _attributed_ to Lisp is AI,
of course. But I don't think one should blame a language
for AI not happening. Marvin Minsky, for example,
blames Robotics and Neural Networks for that.
Jul 18 '05
mi*****@ziplip.com wrote in message news:<BY*********************************@ziplip.com>...
b. AFAIK, Yahoo Store was eventually rewritten in a non-Lisp.
Why? I'd tell you, but then I'd have to kill you :)


From a new footnote in Paul Graham's ``Beating The Averages'' article:

In January 2003, Yahoo released a new version of the editor written
in C++ and Perl. It's hard to say whether the program is no longer
written in Lisp, though, because to translate this program into C++
they literally had to write a Lisp interpreter: the source files of
all the page-generating templates are still, as far as I know, Lisp
code. (See Greenspun's Tenth Rule.)
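The footnote's point, that porting the templates to C++ meant first writing a Lisp interpreter, is the classic Greenspun scenario. A toy illustration of what "an accidental Lisp interpreter" looks like, sketched in Python (this is invented for illustration, not Yahoo's actual evaluator):

```python
# A deliberately tiny s-expression evaluator, the kind of "half of
# Common Lisp" that Greenspun's Tenth Rule jokes about embedding in a
# host language. Expressions are nested tuples: numbers, symbol
# strings, or (operator, arg, ...) forms.

def evaluate(expr, env):
    if isinstance(expr, (int, float)):
        return expr                      # self-evaluating literal
    if isinstance(expr, str):
        return env[expr]                 # symbol lookup
    op, *args = expr
    if op == 'if':                       # special form: (if test then else)
        test, then, alt = args
        return evaluate(then if evaluate(test, env) else alt, env)
    fn = env[op]                         # ordinary application
    return fn(*[evaluate(a, env) for a in args])

env = {'+': lambda a, b: a + b,
       '*': lambda a, b: a * b,
       '<': lambda a, b: a < b}

result = evaluate(('if', ('<', 1, 2), ('*', 3, ('+', 1, 1)), 0), env)
print(result)  # 6
```

A template engine built this way keeps its page-generating sources as expression trees, which is presumably why the Lisp files survived the rewrite.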
Jul 18 '05 #31
In article <9a**************@news1.telusplanet.net>,
Wade Humeniuk <wh******@nospamtelus.net> wrote:
Terry Reedy wrote:
My contemporaneous impression, correct or not, as formed from
miscellaneous mentions in the computer press and computer shows, was
that they were expensive, slow, and limited -- limited in the sense of
being specialized to running Lisp, rather than any language I might
want to use. I can understand that a dedicated Lisper would not
consider Lisp-only to be a real limitation, but for the rest of us...


Well, it's not true. Symbolics for one supported additional languages,
and I am sure others have pointed out that there are C compilers for
the Lisp Machines.

See

http://kogs-www.informatik.uni-hambu...h-summary.html

Section: Other Languages

It says that Prolog, Fortran and Pascal were available.

Wade


Ada also.

Actually, using an incremental C compiler and running C on type- and
bounds-checking hardware - like on the Lisp Machine - is not that bad
an idea. A whole set of problems disappears.
Jul 18 '05 #32
John J. Lee wrote:
Alex Martelli <al***@aleax.it > writes:
[...]
> Whether a system is intelligent must be determined by the result. When
> you feed Deep Blue a chess configuration in which any average chess
> player would make a move that guarantees checkmate, but Deep Blue
> gives you a move that leads to stalemate, you know that it is not very
> intelligent (it did happen).


That may be your definition. It seems to me that the definition given
by a body comprising a vast number of experts in the field, such as
the AAAI, might be considered to carry more authority than yours.


Argument from authority has rarely been less useful or interesting
than in the case of AI, where both our abilities and understanding are
so poor, and where the embarrassing mystery of consciousness lurks
nearby...


It's a problem of terminology, no more, no less. We are dissenting on
what it _means_ to say that a system is about AI, or not. This is no
different from discussing what it means to say that a pasta sauce is
carbonara, or not. Arguments from authority and popular usage are
perfectly valid, and indeed among the most valid, when discussing what given
phrases or words "MEAN" -- not "mean to you" or "mean to Joe Blow"
(which I could care less about), but "mean in reasonable discourse".

And until previously-incorrect popular usage has totally blown away by
sheer force of numbers a previously-correct, eventually-pedantic
meaning (as, alas, inevitably happens sometimes, given the nature of
human language), I think it's indeed worth striving a bit to preserve the
"erudite" (correct by authority and history of usage) meaning against
the onslaught of the "popular" (correct by sheer force of numbers) one.
Sometimes (e.g., the jury's still out on "hacker") the "originally correct"
meaning may be preserved or recovered.

One of the best ways to define what term X "means", when available, is
to see what X means _to people who self-identify as [...] X_ (where the
[...] may be changed to "being", or "practising", etc, depending on X's
grammatical and pragmatic role). For example, I think that when X
is "Christian", "Muslim", "Buddhist", "Atheist", etc, it is best to
ascertain the meaning of X with reference to the meaning ascribed to X by
people who practice the respective doctrines: in my view of the world, it
better defines e.g. the term "Buddhist" to see how it's used, received,
meant, by people who _identify as BEING Buddhist, as PRACTISING
Buddhism_, rather than to hear the opinions of others who don't. And,
here, too, Authority counts: the opinion of the Pope on the meaning of
"Catholicism" matters a lot, and so do those of the Dalai Lama on the
meaning of "Buddhism", etc. If _I_ think that a key part of the meaning
of "Catholicism" is eating a lot of radishes, that's a relatively harmless
eccentricity of mine with little influence on the discourse of others; if
_the Pope_ should ever hold such a strange opinion, it will have
inordinately vaster effect.

Not an effect limited to religions, by any means. What Guido van Rossum
thinks about the meaning of the word "Pythonic", for example, matters a lot
more, to ascertain the meaning of that word, than the opinion on the same
subject, if any, of Mr Joe Average of Squedunk Rd, Berkshire, NY. And
similarly for terms related to scientific, professional, and technological
disciplines, such as "econometrics", "surveyor", and "AI".

The poverty of our abilities and understanding, and assorted lurking
mysteries, have as little to do with the best approach to ascertain the
definition of "AI", as the poverty of our tastebuds, and the risk that the
pasta is overcooked, have to do with the best approach to ascertain the
definition of "sugo alla carbonara". Definitions and meanings are about
*human language*, in either case.
Alex

Jul 18 '05 #33
Alex Martelli <al***@aleax.it> wrote:
But just about any mathematical theory is an abstraction of "mechanisms
underlying thought": unless we want to yell "AI" about any program doing
computation (or, for that matter, memorizing information and fetching it
back again, also simplified versions of "mechanisms underlying thought"),
this had better be a TAD more nuanced. I think a key quote on the AAAI
pages is from Grosz and Davis: "computer systems must have more than
processing power -- they must have intelligence". This should rule out
from their definition of "intelligence" any "brute-force" mechanism
that IS just processing power.


Maybe intelligence IS just processing power, but we don't know (yet) how
all these processes in the human brain[*] work... ;-)
[*] Is there anything else considered "intelligent"?

--
JanC

"Be strict when sending and tolerant when receiving."
RFC 1958 - Architectural Principles of the Internet - section 3.9
Jul 18 '05 #34
On Tue, 14 Oct 2003 13:07:23 -0400, Francis Avila wrote:
"Christian Lynbech" <ch************ ***@ericsson.co m> wrote in message
news:of******** ****@situla.ted .dk.eu.ericsson .se...
>>>>> "mike420" == mike420 <mi*****@ziplip .com> writes:
It is still a question of heated debate what actually killed the lisp
machine industry.

I have so far not seen anybody disputing that they were a marvel of
technical excellence, sporting stuff like colour displays, graphical
user interfaces and laser printers way ahead of anybody else.

> I think what helped kill the lisp machine was probably lisp: many people
> just don't like lisp, because it is a very different way of thinking that
> most are rather unaccustomed to. Procedural, imperative programming is
> simply a paradigm that more closely matches the ordinary way of thinking
> (ordinary = in non-programming, non-computing spheres of human endeavor)
> than functional programming. As such, lisp machines were an oddity and too

Rubbish. Lisp is *not* a functional programming language; and Lisp
Machines also had C and Ada and Pascal compilers (and maybe others).

> different for many to bother, and it was easy for them to come up with
> excuses not to bother (so that the 'barrier of interest,' so to speak, was
> higher.) Lisp, the language family (or whatever you want to call it), still
> has this stigma: lambda calculus is not a natural way of thinking.

Lisp has almost nothing at all to do with the lambda calculus.
[McCarthy said he looked at Church's work, got the name "lambda" from
there, and couldn't understand anything else :-) So that's about all
the relationship there is -- one letter! Python has twice as much in
common with INTERCAL! :-)]

> This isn't to make a value judgment, but I think it's an important thing
> that the whole "functional/declarative v. procedural/OO" debate overlooks.


How does that have anything to do with Lisp, even if true? Lisp is,
if anything, more in the "procedural/OO" camp than the
"functional/declarative" camp. [Multiple inheritance was invented in
Lisp (for the Lisp Machine window system); ANSI CL was the first
language to have an OO standard; ...]

There's nothing wrong with functional languages, anyway (Lisp just
isn't one; try Haskell!)
[much clueless ranting elided]

--
Cogito ergo I'm right and you're wrong. -- Blair Houghton

(setq reply-to
(concatenate 'string "Paul Foley " "<mycroft" '(#\@) "actrix.gen.nz> "))
Jul 18 '05 #35
John Thingstad <jo************ @chello.no> writes:
You all seem to forget www.google.com
One of the most used distributed applications in the world.
Written in Common Lisp (xanalysis)


Nah, google runs on 10K (yes, ten thousand) computers and is written
in C++. Norvig in his talk at the last Lisp conference explained all
this. He said this gave them the ability to patch and upgrade without
taking the whole system down --- they'd just do it
machine-by-machine.

--
Fred Gilham gi****@csl.sri.com
The density of a textbook must be inversely proportional to the
density of the students using it. --- Dave Stringer-Calvert
Jul 18 '05 #36
JanC wrote:
Alex Martelli <al***@aleax.it > schreef:
But just about any mathematical theory is an abstraction of "mechanisms
underlying thought": unless we want to yell "AI" about any program doing
... Maybe intelligence IS just processing power, but we don't know (yet) how
all these processes in the human brain[*] work... ;-)

[*] Is there anything else considered "intelligent"?


Depends on who you ask -- e.g., each of dolphins, whales, dogs, cats,
chimpanzees, etc, are "considered intelligent" by some but not all people.
Alex

Jul 18 '05 #37
On Tue, 14 Oct 2003 16:58:45 GMT, Alex Martelli <al***@aleax.it >
wrote:
Isaac To wrote:
>>> "Alex" == Alex Martelli <al***@aleax.it > writes:
Alex> Chess playing machines such as Deep Blue, bridge playing
Alex> programs such as GIB and Jack ..., dictation-taking programs
Alex> such as those made by IBM and Dragon Systems in the '80s (I
Alex> don't know if the technology has changed drastically since I
Alex> left the field then, though I doubt it), are based on
Alex> brute-force techniques, and their excellent performance comes
Alex> strictly from processing power.

Nearly no program would rely only on non-brute-force techniques. On the


Yes, the temptation to be clever and cut corners IS always there...;-).


It's an essential part of an intelligent (rather than perfect)
solution.

The only one of the programs above where I know the approach taken is
Deep Blue. A perfect search solution is still not viable for chess,
and unlikely to be for some time. Deep Blue therefore made extensive
use of heuristics to make the search viable - much the same as any
chess program.

A heuristic is by definition fallible, but without heuristics the
search cannot be viable at all.

So far as I can see, this is essentially the approach that human chess
players take - a mixture of learned heuristics and figuring out the
consequences of their actions several moves ahead (ie searching). One
difference is that the balance in humans is much more towards
heuristics. Another is that human heuristics tend to be much more
sophisticated (and typically impossible to state precisely in words)
because of the way the brain works. Finally, there is intuition - but
as I believe that intuition is basically the product of unconscious
learned (and in some contexts innate) heuristics and procedural memory
applied to information processing, it really boils down to the same
thing - a heuristic search.
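That mixture - bounded look-ahead plus a heuristic estimate at the cutoff - is exactly the shape of depth-limited minimax. A stripped-down sketch (the game tree and leaf scores here are invented; a chess engine would generate moves and evaluate positions instead):

```python
# Depth-limited minimax: search a few plies ahead, then fall back on a
# heuristic value at the frontier -- the same shape Deep Blue (or any
# chess program) uses, just without the chess.

def minimax(node, depth, maximizing):
    children = node.get('children')
    if depth == 0 or not children:
        return node['value']            # heuristic estimate at the cutoff
    scores = [minimax(c, depth - 1, not maximizing) for c in children]
    return max(scores) if maximizing else min(scores)

# A toy two-ply tree: our move, then the opponent's reply.
tree = {'children': [
    {'children': [{'value': 3}, {'value': 12}]},
    {'children': [{'value': 2}, {'value': 4}]},
]}

best = minimax(tree, depth=2, maximizing=True)
print(best)  # 3: the opponent minimizes each branch, we pick the better one
```

The fallibility lives entirely in the leaf values: a misleading heuristic score at the cutoff is precisely how such a program comes to play a move a human would call stupid.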

The human and computer approaches (basically their sets of heuristics)
differ enough that the computer may do things that a human would
consider stupid, yet the computer still beat the human grandmaster
overall.

Incidentally, I learned more about heuristics from psychology
(particularly social psychology) than I ever did from AI. You see,
like Deep Blue, we have quite a collection of innate, hard-wired
heuristics. Or at least you lot do - I lost a lot of mine by being
born with Asperger syndrome.

No, it wasn't general ability that I lost by the neural damage that
causes Asperger syndrome. I still have the ability to learn. My IQ is
over 100. But as PET and MRI studies of people with Asperger syndrome
reveal, my social reasoning has to be handled by the general
intelligence area of my brain instead of the specialist social
intelligence area that neurotypical people use. What makes the social
intelligence area distinct in neurotypicals? To me, the only answer
that makes sense is basically that innate social heuristics are held
in the social intelligence area.

If the general intelligence region can still take over the job of
social intelligence when needed, the basic 'algorithms' can't be
different. But domain-specific heuristics aren't so fundamental that
you can't make do without them (as long as you can learn some new
ones, presumably).

What does a neural network do under training? Basically it searches
for an approximation of the needed function - a heuristic. Even the
search process itself uses heuristics - "a better approximation may be
found by combining two existing good solutions" for recombination in
genetic algorithms, for instance.
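The recombination heuristic mentioned above can be made concrete with a bare-bones genetic algorithm. Fitness here is the classic OneMax toy problem (count the 1-bits), chosen purely for illustration:

```python
import random

# A minimal crossover-only genetic algorithm: "a better approximation
# may be found by combining two existing good solutions."

def fitness(bits):
    return sum(bits)                    # OneMax: more 1-bits is better

def recombine(a, b):
    cut = random.randrange(1, len(a))   # one-point crossover
    return a[:cut] + b[cut:]

random.seed(1)
pop = [[random.randint(0, 1) for _ in range(16)] for _ in range(30)]
for _ in range(40):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]                  # keep the better third (elitism)...
    children = [recombine(random.choice(parents), random.choice(parents))
                for _ in range(20)]
    pop = parents + children            # ...and breed the rest from it

best = max(fitness(b) for b in pop)
print(best)
```

Nothing here "understands" the problem; selection plus recombination is itself just a search heuristic, which is the point of the paragraph above.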

Now consider the "Baldwin Effect", described in Steven Pinker's "How
the Mind Works"...

"""
So evolution can guide learning in neural networks. Surprisingly,
learning can guide evolution as well. Remember Darwin's discussion of
"the incipient stages of useful structures" - the
what-good-is-half-an-eye problem. The neural-network theorists
Geoffrey Hinton and Steven Nowlan invented a fiendish example. Imagine
an animal controlled by a neural network with twenty connections, each
either excitatory (on) or neutral (off). But the network is utterly
useless unless all twenty connections are correctly set. Not only is
it no good to have half a network; it is no good to have ninety-five
percent of one. In a population of animals whose connections are
determined by random mutation, a fitter mutant, with all the right
connections, arises only about once every million (2^20) genetically
distinct organisms. Worse, the advantage is immediately lost if the
animal reproduces sexually, because after having finally found the
magic combination of weights, it swaps half of them away. In
simulations of this scenario, no adapted network ever evolved.

But now consider a population of animals whose connections can come in
three forms: innately on, innately off, or settable to on or off by
learning. Mutations determine which of the three possibilities (on,
off, learnable) a given connection has at the animal's birth. In an
average animal in these simulations, about half the connections are
learnable, the other half on or off. Learning works like this. Each
animal, as it lives its life, tries out settings for the learnable
connections at random until it hits upon the magic combination. In
real life this might be figuring out how to catch prey or crack a nut;
whatever it is, the animal senses its good fortune and retains those
settings, ceasing the trial and error. From then on it enjoys a higher
rate of reproduction. The earlier in life the animal acquires the
right settings, the longer it will have to reproduce at the higher
rate.

Now with these evolving learners, or learning evolvers, there is an
advantage to having less than one hundred percent of the correct
network. Take all the animals with ten innate connections. About one
in a thousand (2^10) will have all ten correct. (Remember that only one
in a million nonlearning animals had all twenty of its innate
connections correct.) That well-endowed animal will have some
probability of attaining the completely correct network by learning
the other ten connections; if it has a thousand occasions to learn,
success is fairly likely. The successful animal will reproduce
earlier, hence more often. And among its descendants, there are
advantages to mutations that make more and more of the connections
innately correct, because with more good connections to begin with, it
takes less time to learn the rest, and the chances of going through
life without having learned them get smaller. In Hinton and Nowlan's
simulations, the networks thus evolved more and more innate
connections. The connections never became completely innate, however.
As more and more of the connections were fixed, the selection pressure
to fix the remaining ones tapered off, because with only a few
connections to learn, every organism was guaranteed to learn them
quickly. Learning leads to the evolution of innateness, but not
complete innateness.
"""

A human brain is not so simple, but what this says to me is (1) that
anything that allows a person to learn important stuff (without
damaging flexibility) earlier in life should become innate, at least
to a degree, and (2) that learning should work together with
innateness - there is no hard divide (some aspects of neurological
development suggest this too). So I would expect some fairly fixed
heuristics (or partial heuristics) to be hard wired, and I figure
autism and Asperger syndrome are fairly big clues as to what is
innate. Stuff related to nonverbal communication such as body
language, for instance, and a tendency to play activities that teach
social stuff in childhood.
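The numbers in the quoted Hinton and Nowlan passage are easy to check directly (20 connections and 1000 learning trials, as in the quote):

```python
# Probabilities from the Hinton & Nowlan setup quoted above:
# 20 binary connections, up to 1000 learning trials per lifetime.

# A purely innate genome must get all 20 connections right by chance:
p_innate = 0.5 ** 20                     # ~ one in a million, as quoted

# A genome with 10 correct innate connections and 10 learnable ones
# has a ~1/1000 chance per trial of hitting the magic combination...
p_per_trial = 0.5 ** 10
# ...so across 1000 trials, success is indeed "fairly likely":
p_learner = 1 - (1 - p_per_trial) ** 1000

print(p_innate)    # ~9.5e-07
print(p_learner)   # ~0.62
```

So a half-innate learner succeeds most of the time, while a non-learner essentially never does, which is exactly the asymmetry that lets learning guide evolution.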

And given that these connectionist heuristics cannot be stated in
rational terms, they must basically form subconscious processes which
generate intuitions. Meaning that the subconscious provides the most
likely solutions to a problem, with conscious rational thought only
handling the last 5% ish of figuring it out. Which again fits my
experience - when I'm sitting there looking stupid and people say 'but
obviously it's z' and yes, it obviously is - but how did they rule out
a, b, c, d, e, f and so on and so forth so damned quick? Maybe they
didn't. Maybe their subconscious heuristics suggested a few 'obvious'
answers to consider, so they never had to think about a, b, c...
Human intelligence is IMO not so hugely different from Deep Blue, at
least in principle.
> no understanding, no semantic modeling.
> no concepts, no abstractions.
Sounds a bit like intuition to me. Of course it would be nice if
computers could invent a rationalisation, the way that human brains
do.

What? I hear you say...

When people have had their corpus callosum cut, so the left and right
cerebral cortex cannot communicate directly, you can present an
instruction to one eye ('go get a coffee' for instance) and the
instruction will be obeyed. But ask why to the other eye, and you get
an excuse (such as 'I'm thirsty') with no indication of any awareness
that the request was made.

You can even watch action potentials in the brain and predict people's
actions from them. In fact, if you make a decision in less than a
couple of seconds, you probably never thought about it at all -
whatever you may remember. The rationalisation was made up after the
fact, and 'backdated' in your memory.

Hmmm - how long does it take to reply to someone in a typical
conversation...

Actually, it isn't quite so bad - the upper end of this is about 3
seconds IIRC, but the absolute lower end is something like half a
second. You may still have put some thought into it in advance down to
that timescale (though not much, obviously).

Anyway, why?

Well, the textbooks don't tend to explain why, but IMO there is a
fairly obvious explanation. We need excuses to give to other people to
explain our actions. That's easiest if we believe the excuses
ourselves. But if the actions are suggested by subconscious
'intuition' processes, odds are there simply isn't a rational line of
reasoning that can be put into words. Oh dear - better make one up
then!

Again, that's not AAAI's claim, as I read their page. If a problem
can be solved by brute force, it may still be interesting to "model
the thought processes" of humans solving it and implement _those_ in
a computer program -- _that_ program would be doing AI, even for a
problem not intrinsically "requiring" it -- so, AI is not about what
problem is being solved, but (according to the AAAI as I read them)
also involves considerations about the approach.


1. This suggests that the only human intelligence is human
intelligence. A very anthropocentric viewpoint.

2. Read some cognitive neuroscience, some social psychology,
basically whatever you can get your hands on that has cognitive
leanings (decent textbooks - not just pop psychology) - and
then tell me that the human mind doesn't use what you call brute
force methods.
--
Steve Horne

steve at ninereeds dot fsnet dot co dot uk
Jul 18 '05 #38
larry wrote:
"Francis Avila" <fr***********@ yahoo.com> wrote in message
UD: things
Gxyz: x is baked at y degrees for z minutes.
Hx: x is a cake.
Ix: x is batter.

For all x, ( (Ix & Gx(350)(45)) > Hx )

(i.e. "Everything that's a batter and put into a 350 degree oven for 45
minutes is a cake")

...instead of...

1. Heat the oven to 350 degrees.
2. Place batter in oven.
3. Bake 45 minutes
4. Remove cake from oven.

(i.e. "To make a cake, bake batter in a 350 degree oven for 45 minutes")


This is a great opportunity to turn this into a thread where people swap yummy
recipes. :-) In any case, that's more constructive than all that "language X
is better than language Y" drivel. And it tastes better too. :-) So I start
off with my mom's potato salad:

Ingredients:
potatoes (2 kilos)
meat (1 pound; this should be the kind of beef that comes in little threads)
little silver onions
small pickles
pepper, salt, mayonnaise, mustard

Boil potatoes. Cut meat, potatoes, onions and pickles in little pieces, and
mix everything. Add pepper, salt. Mix with mayonnaise. Add a tiny bit of
mustard. Put in freezer for one day.

There are many variants of this; some people add apple, vegetables, etc.

So, what is YOUR favorite recipe? (Parentheses soup doesn't count.)

Hungrily y'rs,

--
Hans (ha**@zephyrfalcon.org)
http://zephyrfalcon.org/

Jul 18 '05 #39
Stephen Horne <$$$$$$$$$$$$$$ $$$@$$$$$$$$$$$ $$$$$$$$$.co.uk > wrote:

Now consider the "Baldwin Effect", described in Steven Pinker's "How
the Mind Works"...
<SNIP>
A human brain is not so simple, but what this says to me is (1) that
anything that allows a person to learn important stuff (without
damaging flexibility) earlier in life should become innate, at least
to a degree, and (2) that learning should work together with
innateness - there is no hard divide (some aspects of neurological
development suggest this too). So I would expect some fairly fixed
heuristics (or partial heuristics) to be hard wired, and I figure
autism and Asperger syndrome are fairly big clues as to what is
innate. Stuff related to nonverbal communication such as body
language, for instance, and a tendency to play activities that teach
social stuff in childhood.


Sometimes I wonder whether Neandertals (a now extinct human
subspecies) relied more on innate knowledge than modern Homo
Sapiens. Maybe individual Neandertals were slightly more intelligent
than modern Sapiens, but since they were not so well organized (they
made their own fires rather than using one common fire for the whole
clan) evolutionary pressure gradually favored modern Sapiens.

Looking at it from this perspective, the complete modern human
population suffers from Asperger's syndrome on a massive scale, while a
more natural human species is now sadly extinct. A great loss, and
something to grieve about, instead of using them in unfavorable
comparisons to modern society, which I find a despicable practice.

However, let the Aspergers, and maybe later robots (no innate knowledge
at all?), inherit the Earth, whether programmed with Python's
Neandertal-like bag of unchangeable tricks or with Lisp-like adaptable
Asperger programming.

Anton

Jul 18 '05 #40
