Greetings groups! I am a rank novice in both C programming and
numerical analysis, so I ask in advance your indulgence. Also, this
question is directed specifically to those familiar with Numerical
Recipes in C (not C++), in practice or at least in theory.
I have taken an interest in the least-squares SVD alternative to
the Remes algorithm offered in section 5.13 of NR, 2nd ed. (see here, http://www.library.cornell.edu/nr/bookcpdf/c5-13.pdf, for reference).
I own the NR code files (so, yes, I am legal!) and in paying scrupulous
attention to the various file dependencies I have been able to set up
and compile a project in Borland C++ 3.1 (yes, old compiler, but the
code is about as old) that is supposed to demonstrate the ratlsq
routine. Those who own the same file set will know about the xratlsq.c
file, which is not in the book. Since the ratlsq routine uses double
rather than float versions of the key SVD procedures, I have had to go
in and scrupulously change the headers and the relevant variable
declarations from float to double.
Things compile fine, but at run time, when I feed the example program
some values for a small test problem, I get a runtime error straight
out of the nrutil file: "allocation error in matrix()" or, at times,
"allocation error in vector()", or something similar.
I gather that the routine is having trouble allocating memory for the
various vectors and matrices it requires, but on a new, fast Intel-type
machine there should be heap memory to spare.
I have compiled and run other sample programs that allocate memory for
vectors and matrices using the NR "wrappers" provided in the nrutil
files, and they work fine. Mind you, they are a little less complicated
(the SVD routines are among the lengthiest and most complex in NR), and
there was no post hoc fiddling with the float-to-double issue. I am
just wondering if I am gobbling up memory by allocating arrays of
doubles, which take twice the space of floats, but for the small
examples I am trying this should not be too taxing for any modern
computer.
Is this related to the memory model under which I compile? I have tried
the options ranging from tiny to huge, but to no avail!
This is a highly specialized question, I know, and probably I have made
little sense to any but those directly familiar with the routines that
perplex me. But if you are familiar with dynamic memory allocation
issues with NR or, better yet, specifically interested in the very code
in question, maybe you can help demystify me.
Many thanks in advance,
Les
lcw1964 wrote:
<snip>
I had similar problems "converting" NR C code to double precision.
Nowadays when I use NR (not very often; I find their code hard to
follow) it's the C++ code, which is in double precision by default.
My problem could be traced back to all kinds of TINY and EPS variables
whose values had been hard-coded somewhere in their source, but from
what you tell me I'm not so sure that's the reason in your case.
Perhaps you could install a more up-to-date compiler like gcc under
MinGW (I assume you're using Windows). Then you could try to run your
executable under valgrind (valgrind.org) to see where in your source
the allocation goes wrong. That may give you a clue.
Hope that helps,
S.
lcw1964 wrote:
<snip>
Forgot to mention that you might also try posting on the nr.com forum.
Likely you're not the only victim.
Hi,
what about netlib/cephes/remes?
Hans M
lcw1964 wrote:
<snip>
It's been a while since I used NRC, but check to make sure that in
your float-to-double conversion you are now allocating double arrays as
well.
I also seem to remember that NRC includes double-precision versions in
the code archive they supply.
Damien
lcw1964 wrote:
<snip>
While your tests are probably not too taxing for any modern operating
system, they probably ARE too taxing for an old C/C++ compiler which
employs a segmented memory architecture. What you need is a compiler
that supports a flat memory model... and uses disk caching when the
size of your problem exceeds available RAM.
Use the GNU gcc compiler and these run-time errors will disappear.
~Glynne
Thanks, this is something I should try.
I must confess that I am Unix/Linux naive and enslaved to the
Windows OS, so wrapping my brain around an "emulator" interface like
Cygwin or MinGW is a first step. I have been looking into the former,
but must confess to being utterly perplexed about what to do with the
various .gz files that get downloaded from whatever mirror I have been
lucky enough to connect with.
On the other hand, the routine that interests me is readily recast in
either Maple or Mathematica. (I am more experienced with the former.)
The computation would be slower, but I would have the advantage of
arbitrary floating-point precision and the ability to output
mathematical results in a visually meaningful way--matrices, formulae,
etc.--and to manipulate such things easily in the CAS environment.
I will let you know how far I get.
Les
~Glynne wrote:
>
Use the GNU gcc compiler and these run-time errors will disappear.
~Glynne
Damien wrote:
It's been a while since I used NRC, but check to make sure that in your
float to double conversion that you are now allocating double arrays as
well.
I thought I had the issue covered, but it is possible I missed
something.
I also seem to remember that NRC supplies double-precision versions in
the archive they supply.
Regrettably, not in the archive I have, version 2.08h, which I got in
the late 1990s. Headers for the double-precision code versions are in
nr.h, but not the converted code itself.
In NRC++ all code is DP by default, but there is a whole extra level of
interdependencies and objects and type definitions, etc.
This may be scurrilous to say here, but I really wish the NR people had
released a 2nd edition of the Pascal version.
Les
lcw1964 wrote:
>
I will let you know how far I get.
I gave up on Cygwin, but a barebones download and installation of
MinGW has been a good place to start.
I have been able to compile the sample program in the documentation and
once I learn how to compile and link multifile projects I will try to
crack this nut.
Thanks for the tip. As a rank novice I have avoided command-line
compilers like the plague, but if this is free and it works, it is
about time I learned!
Les
~Glynne wrote:
>
Use the GNU gcc compiler and these run-time errors will disappear.
I gave up on deciphering the pseudo-Unix interface of MSYS with MinGW
and tried Dev-C++ instead. It acts and looks a lot like BC++ or MSVC++,
with which I am familiar.
Code compiles fine. Same runtime allocation error occurs.
So at least now I am more satisfied that this is something within the
code and my attempted "tweaking" of it, and not just an artifact of an
ancient compiler.
Thanks for the tip. What I may need to do is cut and paste only those
headers and utility routines the code actually uses, rather than just
linking in the whole darn files. This has helped me get rid of compile
errors in the past, but this is the first time I have ever had to deal
with a runtime error.
Les