Hi,
I want to create a 4-dimensional array in which the first three dimensions are fixed size and the final dimension holds 0 to N elements.
For example, double array[500][25][10][<NOT FIXED>]. I can't create it statically because the index sizes are too large. I have also tried a 4-dimensional vector, but it gives me two problems:
(i) the size I am storing in the 4th dimension is more than vector[0][0][0].max_size()
(ii) storing and retrieving take more time with a vector
So please let me know if there is any other solution to store a large array where the first three indices are FIXED and the final one is not FIXED.
Looking forward to an answer from anyone.
Thanks.
Hi,
Thanks for your reply.
Can you give an example or a link on how to create the array myself on the heap?
The example is in the article linked to in my post #2.
Banfa:
On the whole, if you have optimisation switched on, storing and retrieving from a vector is the same as from an array.
Hi,
Thanks for your answer.
double array[500][25][10][<NOT FIXED>]
I am achieving this using vector<vector<vector<vector<double>>>> dBuffer(....);
As of now I am using vector[500][25][10][UnknownSize].
UnknownSize is decided at run time. It is working fine.
But I am facing two problems:
(i) the size I am storing in the 4th dimension is more than vector[0][0][0].max_size()
(ii) storing and retrieving take more time with a vector for a large index.
Please suggest any other method.
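For reference, a minimal sketch of building that nested-vector layout, assuming the 500 x 25 x 10 fixed sizes described above and an innermost dimension that is grown at run time (the name dBuffer is just illustrative):

#include <vector>

int main()
{
    using std::vector;

    // Fixed outer dimensions; the innermost vectors start empty so the
    // fourth dimension can be grown at run time with push_back.
    vector<vector<vector<vector<double>>>> dBuffer(
        500, vector<vector<vector<double>>>(
            25, vector<vector<double>>(
                10, vector<double>())));

    dBuffer[0][0][0].push_back(3.14);    // grow the last dimension for one cell
    dBuffer[499][24][9].push_back(2.71);

    return 0;
}

Note that this creates 500 * 25 * 10 = 125,000 separate inner vectors, each with its own heap allocation, which is one reason access can feel slower than a single contiguous block.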
On my system, a vector<double> can hold 62 million doubles. You have more than that?
Hi, the maximum I am able to store in my double vector is 536870911,
but I need more indices than that.
How many doubles do you need?
Banfa:
Given that a double is 8 bytes (which it normally is), 536870911 of them take about 4 GiB (536,870,911 × 8 bytes ≈ 4.3 × 10^9 bytes), which, if you are using a 32-bit operating system, is likely to be the limit of the virtual memory space for your program.
If that is the case, using an array won't help, because the limit is the process memory space, not what an individual array or vector can hold.
If you are using that much data, use a file or a memory-mapped file.
Hi,
I need to create a double vector of
500*32*16*<un_knownsize>. The final elements I will push at run time.
Hi Banfa,
1) If I increase the virtual memory, can I push more elements?
2) Moreover, I am not familiar with memory-mapped files. Could you please give an example of how to use one?
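For reference, a minimal sketch of a memory-mapped file on a POSIX system (Linux or similar); the file name and element count are purely illustrative, and on Windows the equivalent calls are CreateFileMapping/MapViewOfFile:

#include <sys/mman.h>   // mmap, msync, munmap
#include <fcntl.h>      // open
#include <unistd.h>     // ftruncate, close
#include <cstdio>       // perror

int main()
{
    const size_t count = 100000000;               // number of doubles (illustrative)
    const size_t bytes = count * sizeof(double);

    int fd = open("doubles.bin", O_RDWR | O_CREAT, 0644);
    if (fd < 0) { perror("open"); return 1; }

    // Grow the file to the full size before mapping it.
    if (ftruncate(fd, bytes) != 0) { perror("ftruncate"); return 1; }

    // Map the file into the process address space.
    void* p = mmap(nullptr, bytes, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    double* data = static_cast<double*>(p);
    data[0] = 1.5;               // reads and writes go through ordinary pointers;
    data[count - 1] = 2.5;       // the OS pages the data in and out as needed

    msync(p, bytes, MS_SYNC);    // flush changes back to the file
    munmap(p, bytes);
    close(fd);
    return 0;
}

Bear in mind that a 32-bit process still cannot map 3.5 GB in one go; in that case you would map and unmap a window of the file at a time.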
Are you able to use a database engine like Oracle? I'm starting to think that you will need a database, since there is more to this than one array.
I can use a database engine, but it will take more time to insert the data and retrieve it back again. My requirement is roughly this: I will take around 300 MB of data into a buffer, then process it and convert it into double values (around 3.5 GB of them), then I have to write them to a file. But if I use a database, every time I would have to insert the data, and at the end I would have to read it back from the database before writing it to the file.
Except Oracle doesn't work that way - and it can handle terabyte tables.
As can, I believe, Microsoft SQL Server.
I am starting to worry that you will spend a ton of time writing a database handler rather than processing your doubles.
I have personally written a segmented database from scratch to handle tables that spanned six hard discs. It worked, but I spent a lot of time getting it to work.
BTW: During this talk of a 4D array I need you to be clear that there are only 1D arrays in both C and C++. The "dimensions" are just a way of getting the compiler to do your pointer arithmetic.
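To illustrate the point, a sketch of doing that arithmetic by hand over one flat vector, assuming the 500 x 25 x 10 fixed sizes from earlier and a last dimension (n4 here) chosen at run time:

#include <vector>
#include <cstddef>

int main()
{
    const std::size_t D1 = 500, D2 = 25, D3 = 10;
    const std::size_t n4 = 100;                  // last dimension, decided at run time

    // One contiguous block; the "dimensions" are just index arithmetic.
    std::vector<double> buf(D1 * D2 * D3 * n4);

    // Row-major offset of element [i][j][k][l].
    auto at = [&](std::size_t i, std::size_t j,
                  std::size_t k, std::size_t l) -> double& {
        return buf[((i * D2 + j) * D3 + k) * n4 + l];
    };

    at(499, 24, 9, n4 - 1) = 42.0;   // the same element a real 4D array would address

    return 0;
}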
Banfa:
BTW: During this talk of a 4D array I need you to be clear that there are only 1D arrays in both C and C++. The "dimensions" are just a way of getting the compiler to do your pointer arithmetic.
I have to admit to being slightly surprised that you didn't say this in your first post.
Are you sure a database is required? It sounds like the process is:
- Read Data
- Process Data
- Write File
If (BIG IF) the data can be processed linearly and written straight to file, then you don't actually need to do any more than store the original 300 MByte of data, which isn't so bad.
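If that is the case, a minimal sketch of that linear flow, assuming (purely hypothetically) that the input can be read in fixed-size chunks and each chunk converted and written straight out; the file names, chunk size and placeholder conversion are all illustrative:

#include <cstdio>
#include <vector>

int main()
{
    std::FILE* in  = std::fopen("input.dat", "rb");
    std::FILE* out = std::fopen("output.bin", "wb");
    if (!in || !out) return 1;

    std::vector<unsigned char> chunk(1 << 20);   // 1 MiB of raw input at a time
    std::vector<double> converted;

    std::size_t got;
    while ((got = std::fread(chunk.data(), 1, chunk.size(), in)) > 0) {
        converted.clear();
        for (std::size_t i = 0; i < got; ++i)
            converted.push_back(static_cast<double>(chunk[i]));   // placeholder "conversion"

        std::fwrite(converted.data(), sizeof(double), converted.size(), out);
    }

    std::fclose(in);
    std::fclose(out);
    return 0;
}

Memory use stays at roughly one chunk regardless of how large the total output becomes.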
Are you sure a database is required:
You may be right, but it's hard to say, since the number of doubles required is unknown. That raises the possibility that the data will exceed the maximum file size, which would force a segmented data structure. The result is spending time on things other than processing the data.
Try this code:

#include <stdio.h>

int main()
{
    int i, j, k, l, size;
    int a[2][2][2][2];

    size = 2;

    a[0][0][0][0] = 5;
    a[0][0][0][1] = 3;
    a[0][0][1][0] = 5;
    a[0][0][1][1] = 3;
    a[0][1][0][0] = 6;
    a[0][1][0][1] = 7;
    a[0][1][1][0] = 6;
    a[0][1][1][1] = 7;
    a[1][0][0][0] = 8;
    a[1][0][0][1] = 9;
    a[1][0][1][0] = 8;
    a[1][0][1][1] = 9;
    a[1][1][0][0] = 9;
    a[1][1][0][1] = 7;
    a[1][1][1][0] = 9;
    a[1][1][1][1] = 7;

    for (i = 0; i < size; i++) {
        for (j = 0; j < size; j++) {
            for (k = 0; k < size; k++) {
                for (l = 0; l < size; l++) {
                    printf("Value of a[%d][%d][%d][%d] :- %d ",
                           i, j, k, l, a[i][j][k][l]);
                    printf("\n");
                }
            }
        }
    }

    return 0;
}
Banfa:
@Sherin, I don't think you have appreciated the problem that this thread is about, which is not having a 4-dimensional array but rather processing an extremely large volume of data. Because of that, the rather trivial example code posted is of little to no use in solving the actual problem.
@weaknessforcats
"there are only 1D arrays in both C and C++. The "dimensions" are just a way of getting the compiler to do your pointer arithmetic."
That is interesting. Instead of looking at it as multi-dimensional, maybe I should look at it as single-dimensional, with the potential for dimensions inside that first dimension, like a tree. Start with one dimension and get that to work, then work on one more dimension at a time, getting each of those to work before going on to more of the tree.
Thanks: This is an easier way to look at it (for me).
I am working on (still at it) a shortest-path algorithm (with variably changing constraints) that can handle a 1,000+ by 1,000+ grid. I was considering multi-dimensional vectors and trying to get those to work. I think that I shall go back to designing based upon a single-dimension vector now, and potentially use other dimensions for variable conditions within that first dimension's elements.
Maybe as a single dimension like this?
std::vector<std::wstring> Grid_X_Y(1000000);
to start. Then I can concatenate the elements as data is added or changed.
Or maybe with multiple dimensions like this?
std::vector<std::vector<std::wstring>> Grid_X_Y(1000000, std::vector<std::wstring>(10));
to start. Then I can add up to 10 values per each of the first dimension's elements. This might make it easier to choose a value later, rather than having to parse each wstring as in the single-dimension example.
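If it helps, a sketch of that second idea, assuming (illustratively) a 1000 x 1000 grid stored as one vector of cells, each cell able to hold up to the 10 wstring values mentioned above:

#include <vector>
#include <string>
#include <cstddef>

int main()
{
    const std::size_t width = 1000, height = 1000;

    // One vector of 1,000,000 cells; each cell is a small vector of values
    // that can grow as data is added.
    std::vector<std::vector<std::wstring>> Grid_X_Y(width * height);

    // Cell (x, y) lives at index y * width + x in the single dimension.
    auto cell = [&](std::size_t x, std::size_t y) -> std::vector<std::wstring>& {
        return Grid_X_Y[y * width + x];
    };

    cell(0, 0).push_back(L"start");
    cell(999, 999).push_back(L"goal");

    return 0;
}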
Maybe this process might help the OP.
Thank you weaknessforcats.
Thank you bytes.com.
Glory to God.