Bytes | Software Development & Data Engineering Community

What would be a faster alternative to boxing/unboxing?

I have a situation where an app writes data of various types (primitives and
objects) into a single dimensional array of objects. (This array eventually
becomes a row in a data table, but that's another story.) The data is
written once and then read many times. Each primitive read requires
unboxing. The data reads are critical to overall app performance. In the
hopes of improving performance, we have tried to find a way to avoid the
unboxing and casting. So far, nothing has worked well.

One solution we tried was to create our own simple object to wrap the
primitives. In this object we used a public field that allowed reading of
the primitive data value without a property or unboxing. However, overall
performance went down. We are still investigating, but our current
speculations are that:
1. the overhead of creating the object was significantly greater than the
overhead of boxing and this offset any gains we might have gotten on the
reads (a guess).
2. wrapping each primitive forced us to introduce a new virtual method call
in place of the boxing/casting. Maybe the overhead of this call is what
caused the performance to go down. (Another guess.)

BTW, performance went down 15%.

Does anyone have any ideas on how to implement the following:

A one-dimensional array (with normal indexing) containing heterogeneous strongly
typed data that allows reading/writing without casting and without
boxing/unboxing.

All thoughts/suggestions are appreciated.


Nov 15 '05
That's not hard. Use an array of unsigned ints; the upper 3 bits can be a
type identifier; the remaining bits index into an array of a specific type.
For example, wrap it in a struct to make it easier:

enum ArrayType
{
    Int = 0x1,
    Double = 0x2,
    ...
}

struct ArrayEntry
{
    const unsigned int typeMask = 0x7 << 29;
    const unsigned int indexMask = ~typeMask;

    usigned int data;

    public ArrayEntry(ArrayType type, int index)
    {
        data = 0; // for compiler warning
        Type = type;
        Index = index;
    }

    ArrayType Type
    {
        get { return (ArrayType)(data >> 29); }
        set { value = Index | (((int)data) << 29); }
    }

    int Index
    {
        get { return data & indexMask; }
        set { data = Type | (value & indexMask); }
    }
}

Then if you have ArrayEntry[] a, you do a[i].Index, a[i].Type, then use that
info to index into the type-specific arrays, which I would call pools, if
you choose to use them that way.
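As posted, the sketch will not compile: C# spells `unsigned int` as `uint`, and both property setters need to assign to `data`. A minimal compilable version of the same 3-bit-tag/29-bit-index packing (my reconstruction, not Frank's tested code) might look like:

```csharp
using System;

enum ArrayType : uint { Int = 0x1, Double = 0x2 }

// Packs a 3-bit type tag (upper bits) and a 29-bit pool index (lower bits)
// into a single uint.
struct ArrayEntry
{
    const int TypeShift = 29;
    const uint IndexMask = (1u << TypeShift) - 1; // low 29 bits

    uint data;

    public ArrayEntry(ArrayType type, int index)
    {
        data = ((uint)type << TypeShift) | ((uint)index & IndexMask);
    }

    public ArrayType Type
    {
        get { return (ArrayType)(data >> TypeShift); }
        set { data = (data & IndexMask) | ((uint)value << TypeShift); }
    }

    public int Index
    {
        get { return (int)(data & IndexMask); }
        set { data = (data & ~IndexMask) | ((uint)value & IndexMask); }
    }
}
```

Each struct fits in 4 bytes, so an `ArrayEntry[]` is one contiguous allocation with no per-entry object headers.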

Regards,
Frank Hileman
Prodige Software Corporation

check out VG.net: www.prodigesoftware.com/screen5.png
An animated vector graphics system integrated in VS.net
beta requests: vgdotnetbeta at prodigesoftware.com

"100" <10*@100.com> wrote in message
news:eQ******** ******@TK2MSFTN GP10.phx.gbl...
The real problem with this idea is: how can we possibly know which type
the object at a given index is, in order to know which overload to use to
retrieve the data? We can provide a method that returns the type of an
object by index value. But then we need to check this type and call the
appropriate method, which will introduce more overhead when reading data.
It may turn out that boxing/unboxing is the best solution and the problem
with the performance is not there.

That's why I think we can't come up with a good idea unless Mountain gives
us an example of how he is planning to use this storage. He may not need
to have the objects ordered, and then this solution will work and we won't
have the problem of wasting memory.

B\rgds
100

"100" <10*@100.com> wrote in message
news:eb******** ********@TK2MSF TNGP09.phx.gbl. ..
Hi Paul,
You are a hundred percent right. That was my fault. I wrote this minutes
before I went home yesterday, and it's my fault that I posted it without
any revision.

Thanks for bringing this out. We can't use an indexer for that. What we
can do is use separate operations SetData and GetData with as many
overloads as the different types we want to accept. GetData cannot return
the value because that will introduce the same problem that we already
have with the indexer. So we can have a prototype like

void GetData(int index, out <type> value)
{
}

this should fix the problem.

B\rgds
100

"Paul Robson" <au******@autis muk.muralichuck s.freeserve.co. uk> wrote in
message news:bp******** **@news8.svr.po l.co.uk...
100 wrote:

> class Storage
> {
> int[] intArr= new int[10];
> double[] dblArr = new double[10];
> object[] refArr = new object[10];
>
> public int this[int i]
> {
> get{return intArr[i];}
> set{intArr[i] = value;}
> }
>
> public double this[int i]

I thought that C# couldn't do polymorphism on return types only? Am I
wrong?



Nov 15 '05 #21
Frank,
That's awesome! Beautiful! Just what I was looking for, and the style is
consistent with other code I've written, so it's a beautiful fit! Thanks.

I think I'll use 6 bits for the types (I probably have about 15 different
types I use already). My (upper) array length will not need to be more than
1 million, so this leaves me plenty of room. I might even allocate another
bit for the types.

The obvious way to do the last step you mention, indexing into the
type-specific sub arrays (pools), is to select the type specific array via a
switch or if/else block. Do you have a better suggestion? This step could
end up negating some of the potential performance gains if it isn't done
correctly. Another indexing/dereferencing operation at this step would be
great. Maybe I can put all the type-specific sub arrays in an object[] and
cast from an object to a type[] (i.e., double[], bool[], int[], etc.).

object[] poolArray = new object[numberOfTypes]; //array of type specific
arrays

//this statement would exist inside an overloaded "Get" method that requests
a double:
return ((double[])poolArray[typeNdx])[itemNdx];

I have the above code because I was putting type specific arrays of length 1
inside each slot in the upper array as a way to avoid boxing and unboxing.
(In that case the 2nd index was always 0. This is the experimental solution
I previously mentioned that I would post. It's fairly simple to implement,
so I'll just skip a full post.)

Also, I will need to come up with a nice way to properly size the
type-specific arrays (pools) before processing starts. My current design
should support this with a little work. I already know the total size
(i.e., the total number of items to go into the array) in advance.
However, I do not currently know the size (count) needed on a per-type
basis, so I'll have to figure out a way to compute it. This step isn't
really as performance critical because it only needs to be done once.

Regards,
Mountain

P.S. Your code is a perfect example of why I love C# so much.
"Frank Hileman" <fr******@no.sp amming.prodiges oftware.com> wrote in message
news:eo******** ******@tk2msftn gp13.phx.gbl...
That's not hard. Use an array of unsigned ints; upper 3 can be type
identifier; remaining bits index into array of a specific type. For example, wrap it in a struct to make it easier:

enum ArrayType
{
Int = 0x1,
Double = 0x2,...
}

struct ArrayEntry
{
const unsigned int typeMask = 0x7 << 29;
const unsigned int indexMask = ~ typeMask;

usigned int data;

public ArrayEntry(Arra yType type, int index)
{
data= 0; // for compiler warning
Type = type;
Index = index;
}

ArrayType Type
{
get { return (ArrayType)(dat a>> 29); }
set { value = Index | ((int)data) << 29); }
}

int Index
{
get { return data & indexMask; }
set { data = Type | (value & indexMask); }
}
}

Then if you have ArrayEntry[] a, you do a[i].Index, a[i].Type, then use that info to index into the type-specific arrays, which I would call pools, if
you choose to use them that way.

Regards,
Frank Hileman
Prodige Software Corporation

check out VG.net: www.prodigesoftware.com/screen5.png
An animated vector graphics system integrated in VS.net
beta requests: vgdotnetbeta at prodigesoftware .com

"100" <10*@100.com> wrote in message
news:eQ******** ******@TK2MSFTN GP10.phx.gbl...
The real problem with this idea is - how we can possibly know which type

is
the object at given index in order to know what overload to use to

retrieve
the data. We can provide method that returns the type of an object by

index
value. But then we need to check this type and call the appropriate method which will introduce more overhead when reading data. It may turns that
boxing/unboxing is the best solution and the problem with the prformance

is
not there.

That's why I think we can't come up with good idea unless Mountain gives

us
an example how he is planning to use this storage. He may not need to have the objects ordered and then this solution will work and we won't have the problem of wasting the memory.

B\rgds
100

Nov 15 '05 #22
> I think I'll use 6 bits for the types (I probably have about 15 different
types I use already). My (upper) array length will not need to be more than 1 million, so this leaves me plenty of room. I might even allocate another
bit for the types.
With 5 bits you have 16 different enum values that can be put in (0-15).
The obvious way to do the last step you mention, indexing into the
type-specific sub arrays (pools), is to select the type specific array via a switch or if/else block. Do you have a better suggestion? This step could
end up negating some of the potential performance gains if it isn't done
correctly. Another indexing/dereferencing operation at this step would be
great. Maybe I can put all the type-specific sub arrays in an object[] and
cast from an object to a type[] (ie, double[], bool[], int[] etc.).

object[] poolArray = new object[numberOfTypes]; //array of type specific
arrays

//this statement would exist inside an overloaded "Get" method that requests a double:
return ((double[])poolArray[typeNdx])[itemNdx];
That would work, but I think the switch would be faster. The one thing I
worry about with your method is possible array type conversion. I don't
think it would happen but the compiler might put some dynamic type checking
in there that is really unnecessary (to make sure the cast is fair). So I
think a switch is most likely faster (small switches optimize down to tight
machine code), since no dynamic cast is needed. Also no array bounds
checking is needed:

switch (typeNdx)
{
    case ArrayType.Int:
        return intArray[itemNdx];
    // ...
}

In C of course, the array indexing would not cause typechecking or array
bounds checking. But even there the switch would probably win because it
does not require an extra memory dereference (every array index operation is
an extra dereference).
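Assembled into one place, a minimal storage class along these lines might look like the sketch below. All names are illustrative, the pools are naively sized to full capacity, and the tag lives in a plain struct field rather than packed bits, to keep the switch dispatch in focus:

```csharp
using System;

// Sketch of the tagged-entry + per-type pools idea (names are mine).
public class TypedStorage
{
    enum Tag : byte { Int, Double }

    struct Entry
    {
        public Tag Type;  // which pool holds the value
        public int Index; // slot within that pool
    }

    Entry[] entries;
    int[] intPool;
    double[] doublePool;
    int count, intCount, doubleCount;

    public TypedStorage(int capacity)
    {
        entries = new Entry[capacity];
        intPool = new int[capacity];      // worst case: everything is an int
        doublePool = new double[capacity];
    }

    public void Add(int value)
    {
        intPool[intCount] = value;
        entries[count].Type = Tag.Int;
        entries[count].Index = intCount;
        ++intCount; ++count;
    }

    public void Add(double value)
    {
        doublePool[doubleCount] = value;
        entries[count].Type = Tag.Double;
        entries[count].Index = doubleCount;
        ++doubleCount; ++count;
    }

    // Strongly typed read: no boxing, one switch, two array indexings.
    public double GetDouble(int i)
    {
        Entry e = entries[i];
        switch (e.Type)
        {
            case Tag.Double: return doublePool[e.Index];
            case Tag.Int: return intPool[e.Index]; // implicit widening
            default: throw new InvalidCastException();
        }
    }

    public int GetInt(int i)
    {
        Entry e = entries[i];
        if (e.Type != Tag.Int) throw new InvalidCastException();
        return intPool[e.Index];
    }
}
```

Sizing every pool to the full capacity trades memory for simplicity; the arraylist-style growth discussed next avoids that.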
Also, I will need to come up with a nice way to properly size the type
specific arrays (pools) before processing starts. My current design should
support this, with a little work. I already know the total size (ie, total
number of items to go into the array) in advance. However, I do not
currently know the size (count) needed on a per-type basis, so I'll have to figure a way to compute this. This step isn't really as peformance critical because it only needs to be done once.
I would not precompute the sizes at all. Instead, allocate the
type-specific arrays on the fly, in the same way an arraylist works. For
example, here is a type-specific arraylist sort of collection. It is a bit
of code, I will put it in the next message.
P.S. Your code is a perfect example of why I love C# so much.


Thanks! Structs in C# wrap bitfields beautifully. I think perhaps Brad
Abrams and others should not discourage mutable structs so much, because
they are very useful when used as fields in objects. And the compiler can
catch the use of a temporary struct as an lvalue.

I thought of a way to clean up the "magic numbers" a bit (0x7 and 29 in my
sample). In the ArrayType enum:

[Flags]
enum ArrayType : uint
{
Int = 0x1,
Double = Int << 1,
...
All = Int | Double | ...
}

That would give you an unshifted bit mask in the enum, called All. Then, if
you write a function which can count the bits in an unsigned int, you could
compute the constants this way:

static readonly int typeShift = 32 -
    BitFunctions.CountBits((uint)ArrayType.All);
static readonly uint typeMask = (uint)ArrayType.All << typeShift;
static readonly uint indexMask = ~typeMask;

and in the property:

ArrayType Type
{
    get { return (ArrayType)(data >> typeShift); }
    set { data = (uint)Index | (((uint)value) << typeShift); }
}

Note the change to the set function! I made a mistake before, and swapped
the words "value" and "data". I just wrote the code out without an editor
or compiler.
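`BitFunctions.CountBits` is left as an exercise above; a simple population count that would fit that signature (one of several standard ways to do it) is:

```csharp
class BitFunctions
{
    // Counts set bits by clearing the lowest set bit each iteration
    // (Kernighan's method). This runs once at startup to compute the
    // masks, so speed is irrelevant here.
    public static int CountBits(uint v)
    {
        int n = 0;
        while (v != 0)
        {
            v &= v - 1; // clear the lowest set bit
            ++n;
        }
        return n;
    }
}
```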

- Frank


Nov 15 '05 #23
> With 5 bits you have 16 different enum values that can be put in (0-15).

Whoops! I meant: with 4 bits you have 16 possible enum values. And the way
I assigned the values was wrong; they are not bitflags:

enum ArrayType : uint
{
    Int = 1,
    Double = 2,
    String = 3,
    ...
    Max = whatever the biggest one is...
}

Then in the bit shifting computation, you would use Max instead, but if it
is not all 1 bits, you have to count the position of the topmost set bit,
and not the number of bits set.

Sorry about that. Should always run these things through a compiler...

Nov 15 '05 #24
Here is the basic code for an arraylist style collection. I took out some of
the ICollection stuff, enumerator, etc. It is called PointList, but really
it is an array of Vector structs (x and y values). So you can see how it can
be used for any data type. Just swap your type name for "Vector".

public class PointList
{
    // -------- static members --------
    private const int DefaultCapacity = 4;

    // -------- instance members --------
    private Vector[] items;
    private int count;

    public PointList()
    {
        items = new Vector[DefaultCapacity];
    }

    public int Count
    {
        get { return count; }
    }

    public int Capacity
    {
        get { return items.Length; }
        set
        {
            if (items.Length == value)
                return;
            // Check.Argument is the author's argument-validation helper.
            Check.Argument(value >= count, "value",
                "Capacity must be greater or equal to Count");
            int newCapacity = value > DefaultCapacity ? value : DefaultCapacity;
            Vector[] newItems = new Vector[newCapacity];
            if (count > 0)
                Array.Copy(items, 0, newItems, 0, count);
            items = newItems;
        }
    }

    public Vector this[int index]
    {
        get { return items[index]; }
        set
        {
            if (index >= count)
                throw new ArgumentOutOfRangeException("index", index,
                    "Index must be smaller than Count.");
            // Assumes Vector defines operator ==.
            if (items[index] == value)
                return;
            items[index] = value;
        }
    }

    public int Add(Vector point)
    {
        if (items.Length == count)
            Capacity = items.Length * 2;
        int index = count;
        items[index] = point;
        ++count;
        return index;
    }

    public void AddRange(Vector[] points)
    {
        int minCapacity = count + points.Length;
        if (minCapacity > items.Length)
            Capacity = minCapacity > DefaultCapacity ? minCapacity : DefaultCapacity;
        Array.Copy(points, 0, items, count, points.Length);
        count += points.Length;
    }

    public virtual void Clear()
    {
        Array.Clear(items, 0, count);
        count = 0;
    }
}
Nov 15 '05 #25
Frank,
Thanks for your additional comments. My responses are inline.
Regards,
Mountain

That would work, but I think the switch would be faster. The one thing I
worry about with your method is possible array type conversion. I don't
think it would happen but the compiler might put some dynamic type checking in there that is really unnecessary (to make sure the cast is fair). So I
think a switch is most likely faster (small switches optimize down to tight machine code), since no dynamic cast is needed. Also no array bounds
checking is needed:
I'll take your advice on this. My only experience in this regard is a data
structure I wrote in C++ that used dereferencing extensively and in place of
conditionals whenever possible. It was super fast, outperforming the
optimum data structure in this category. (Just so this doesn't sound too
vague, I'll add that it was a binary tree that outperformed an identically
implemented height balanced AVL tree.) This experience has made me inclined
to favor dereferencing whenever possible, but obviously that was a limited
situation.

I would not precompute the sizes at all. Instead, allocate the type-specific arrays on the fly, in the same way an arraylist works. For example, here is a type-specific arraylist sort of collection. It is a bit of code, I will
put it in the next message.
Thanks.
I will use a "type-specific arraylist sort of collection" if I can implement
it such that when I clear my arrays and prepare for new data I don't have to
reallocate anything. That way the cost would be incurred only once. I'm sure
this won't be a problem. All subsequent writes (thousands of them) of new
data (after the first full population and clear) will be identically sized
and identically indexed.

Note the change to the set function! I made a mistake before, and swapped
the words "value" and "data". I just wrote the code out without an editor
or compiler.


I didn't see that one, but I did see this:
"usigned int data;"

That clued me in that you wrote it without an editor/compiler -- I was
impressed. If I don't check my code samples by compiling, I typically end up
with something nonsensical in there somewhere.
Nov 15 '05 #26
I had to work up my own example since you didn't post any code (hey, I was
bored today), so maybe my assumptions are really different from yours, but
I found one way of doing it that was 28% faster than the object array in
pure C# and 41% faster with a few easy IL modifications.

Rather than create a specific container *class*, create a container *union
struct* with all value types residing in the same space in your struct.
Now I only tried this with one reference type and two value types, but it
did work a lot faster (a plain struct was slower; for some reason creating
a union via the StructLayout attribute was faster).

My 4 tests are downloadable from here:
http://chadich.mysite4now.com/Fastes...enousArray.zip

There's a PDF/Word doc that shows a snapshot of the IL I took out of EACH
of the struct constructors (it should be obvious which code is redundant).
But post again if you are unsure how to use ILDasm and ILAsm to
de/re-compile it.
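The zip is the authoritative source; as a rough sketch of the technique described (not Richard's actual code), an explicit-layout union struct could look like this. Field names and offsets are my guesses; the one hard rule is that the object reference must not overlap any value field, or the CLR will refuse to load the type:

```csharp
using System;
using System.Runtime.InteropServices;

// Union-style container: IntValue and DoubleValue share the same 8 bytes,
// like a C union. The reference field gets its own non-overlapping,
// pointer-aligned slot, and a caller-defined tag says which field is live.
[StructLayout(LayoutKind.Explicit)]
struct Variant
{
    [FieldOffset(0)] public int IntValue;
    [FieldOffset(0)] public double DoubleValue;
    [FieldOffset(8)] public object RefValue;
    [FieldOffset(16)] public byte TypeCode; // caller-defined type tag
}
```

Reading `IntValue` after writing `DoubleValue` reinterprets the low 4 bytes, which is exactly the C-union behavior being discussed; the tag field is what keeps such reads deliberate.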

Richard

--
Please excuse me, my French is very poor. However, if you see bad C#, it's
my fault!
"Mountain Bikn' Guy" <vc@attbi.com > wrote in message
news:_W6vb.1969 14$9E1.1057145@ attbi_s52...
I have a situation where an app writes data of various types (primitives and objects) into a single dimensional array of objects. (This array eventually becomes a row in a data table, but that's another story.) The data is
written once and then read many times. Each primitive read requires
unboxing. The data reads are critical to overall app performance. In the
hopes of improving performance, we have tried to find a way to avoid the
unboxing and casting. So far, nothing has worked well.

One solution we tried was to create our own simple object to wrap the
primitives. In this object we used a public field that allowed reading of
the primitive data value without a property or unboxing. However, overall
performance went down. We are still investigating, but our current
speculations are that:
1. the overhead of creating the object was significantly greater than the
overhead of boxing and this offset any gains we might have gotten on the
reads (a guess).
2. wrapping each primitive forced us to introduce a new virtual method call in place of the boxing/casting. Maybe the overhead of this call is what
caused the performance to go down. (Another guess.)

BTW, performance went down 15%.

Does anyone have any ideas on how to implement the following:

A 1-dimension array (with normal indexing) containing heterogenous strongly typed data that allows reading/writing without casting and without
boxing/unboxing.

All thoughts/suggestions are appreciated.

Nov 15 '05 #27
> I'll take your advice on this. My only experience in this regard is a
> data structure I wrote in C++ that used dereferencing extensively in
> place of conditionals whenever possible. [...]


Yep, sounds like you had a killer implementation. It all depends on the
assembly generated. When you start poking at the assembly level you can see
that every pointer deref has a cost that can be avoided. We are probably
making a lot of assumptions about the generated code from a high level
language. I would be curious to see if there was any speed diff between the
switch and the generic array method. The other thing mentioned, a union,
sounds interesting too, but that requires special permissions at run-time,
because that technique theoretically has security implications. I miss the
union from C. Union would have been my first thought in C++ for this
problem. Seems like they might have put union somehow in C# without this
security problem -- perhaps by forcing the CLR to clear the bits before you
cast it differently.

have fun - Frank
Nov 15 '05 #28
100
Hi Frank,
Yes, you can do that. As I said: "We can provide method that returns the
type of an object by index value. But then we need to check this type
and...". As far as I remember, Mountain was concerned about reading
performance. My question now is: what is your suggestion for the interface
of the storage class as a whole?
So if you have

class MyStorage
{
.....
}

what interface do you suggest to retrieve the data? If you have one method

XXXX GetData(int index)

what is the type of XXXX? If it is 'object' we have to unbox it (Mountain
wanted to avoid exactly this).
If your idea is not to use one class, but instead a bunch of different
arrays for each type plus one for the ArrayElements, we'll have something
like this:
ArrayElements e = arrElements[index];
switch(e.Type)
{
case Integer:
int i = intArr[e.Index];
/// do sth with int
break;
case Double:
double d = dblArr[e.Index];
/// do sth with double
break;
......

}
Ok, we avoided boxing/unboxing. But how many copy operations do we have?

1 for returning the ArrayElement value + 1 for extracting the index + 1
for extracting the type + 1 for extracting the concrete value from the
type-specific array. At least 4 copy operations, plus supporting
operations such as shifts and logical ops, plus 2 array indexings (with
all the bounds checks, etc.). Mountain finds unboxing (one copy + one
typecheck) too expensive. Do you think this is cheaper?

B\rgds
100

"Frank Hileman" <fr******@no.sp amming.prodiges oftware.com> wrote in message
news:eo******** ******@tk2msftn gp13.phx.gbl...
That's not hard. Use an array of unsigned ints; upper 3 can be type
identifier; remaining bits index into array of a specific type. For example, wrap it in a struct to make it easier:

enum ArrayType
{
Int = 0x1,
Double = 0x2,...
}

struct ArrayEntry
{
const unsigned int typeMask = 0x7 << 29;
const unsigned int indexMask = ~ typeMask;

usigned int data;

public ArrayEntry(Arra yType type, int index)
{
data= 0; // for compiler warning
Type = type;
Index = index;
}

ArrayType Type
{
get { return (ArrayType)(dat a>> 29); }
set { value = Index | ((int)data) << 29); }
}

int Index
{
get { return data & indexMask; }
set { data = Type | (value & indexMask); }
}
}

Then if you have ArrayEntry[] a, you do a[i].Index, a[i].Type, then use that info to index into the type-specific arrays, which I would call pools, if
you choose to use them that way.

Regards,
Frank Hileman
Prodige Software Corporation

check out VG.net: www.prodigesoftware.com/screen5.png
An animated vector graphics system integrated in VS.net
beta requests: vgdotnetbeta at prodigesoftware .com

"100" <10*@100.com> wrote in message
news:eQ******** ******@TK2MSFTN GP10.phx.gbl...
The real problem with this idea is - how we can possibly know which type

is
the object at given index in order to know what overload to use to

retrieve
the data. We can provide method that returns the type of an object by

index
value. But then we need to check this type and call the appropriate method
which will introduce more overhead when reading data. It may turns that
boxing/unboxing is the best solution and the problem with the prformance

is
not there.

That's why I think we can't come up with good idea unless Mountain gives

us
an example how he is planning to use this storage. He may not need to have the objects ordered and then this solution will work and we won't have the problem of wasting the memory.

B\rgds
100

"100" <10*@100.com> wrote in message
news:eb******** ********@TK2MSF TNGP09.phx.gbl. ..
Hi Paul,
You are hundred percent right. That was my fault. I wrote this minutes
before I went home yesterday and its my fault that I posted this without
any
revision.

Thanks for bringing this out. We can't use indexer for that. What we
can do
is to use separate operations SetData and GetData with as many
overloads as
different types we want to accept. GetData cannot return value because

ot will introduce the same problem that we have already with the indexer.

So
we
can have prototype like

void GetData(int index, out <type> value)
{
}

this should fix the problem.

B\rgds
100

"Paul Robson" <au******@autis muk.muralichuck s.freeserve.co. uk> wrote

in message news:bp******** **@news8.svr.po l.co.uk...
> 100 wrote:
>
> > class Storage
> > {
> > int[] intArr= new int[10];
> > double[] dblArr = new int[10];
> > object[] refArr = new object[10];
> >
> > public int this[int i]
> > {
> > get{return intArr[i];}
> > set{intArr[i] = value;}
> > }
> >
> > public double this[int i]
>
> I thought that C# couldn't do polymorphism on return types only ? Am I > wrong ?
>



Nov 15 '05 #29
If you cannot use strong typing throughout, and avoid casting to object,
then you may as well stick to an ArrayList and boxing. The only interface
that can be used to efficiently retrieve the data is a strongly typed one,
GetInt, GetDouble, etc. Otherwise the whole concept is defeated.

Regarding the overhead from the copy, shift, logical ops, etc. These are all
very fast and inlined. This is the great thing about structs. Time it and
see. Make sure the class you use is not derived from MarshalByRefObject,
which defeats inlining. Based on my own perf tuning experience strongly
typed arrays should beat the boxing alternative hands down.
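As a rough way to "time it and see", here is a minimal harness comparing unboxing reads against typed-array reads. It uses `Environment.TickCount`, the era-appropriate timer; the names and loop counts are mine, and absolute numbers will vary by machine:

```csharp
using System;

class BoxingBench
{
    const int N = 1000000;
    const int Reps = 50;

    static void Main()
    {
        object[] boxed = new object[N];
        double[] typed = new double[N];
        for (int i = 0; i < N; i++)
        {
            boxed[i] = (double)i; // each store boxes a double
            typed[i] = i;
        }

        double sum1 = 0, sum2 = 0;
        int t0 = Environment.TickCount;
        for (int r = 0; r < Reps; r++)
            for (int i = 0; i < N; i++)
                sum1 += (double)boxed[i]; // unbox on every read
        int t1 = Environment.TickCount;
        for (int r = 0; r < Reps; r++)
            for (int i = 0; i < N; i++)
                sum2 += typed[i];         // plain typed read
        int t2 = Environment.TickCount;

        Console.WriteLine("unboxed reads: {0} ms", t1 - t0);
        Console.WriteLine("typed reads:   {0} ms", t2 - t1);
        Console.WriteLine("check: {0} == {1}", sum1, sum2); // keep sums live
    }
}
```

TickCount resolution is coarse (roughly 10-15 ms), so keep `Reps` high enough that each phase runs for hundreds of milliseconds.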

regards, Frank

"100" <10*@100.com> wrote in message
news:%2******** ********@TK2MSF TNGP10.phx.gbl. ..
Hi Frank,
Yes, you can do that. Haw I said "We can provide method that returns the
type of an object by index
value. But then we need to check this type and...". As far as I remember
Mountain was concerned about reading performance. My question now is: what
is your suggestion about the storage-class (as a whole) interface.
So if you have
class MyStorage
{
.....
}

What interface you suggest to retrieve the data. If you have one method
XXXX GetData(int index)
What is the type of XXXX if it is an 'obect' we have to unbox the object
(Montian wanted to avoid exactly this).
If your idea is not to use one class, but insted bunch of different arrays
for each type + one for ArrayElements
We'll have something like this
ArrayElements e = arrElements[index];
switch(e.Type)
{
case Integer:
int i = intArr[e.Index];
/// do sth with int
break;
case Double:
double d = dblArr[e.Index];
/// do sth with double
break;
......

}
Ok, we avoided boxing/unboxing. But how many copy operations we do have?

1 for returning the ArrayElement value + 1 for extracting the index + 1 for extracting the type + 1 for extracting the concrete value from the
type-cpecific array. At least 4 copy operations + supporting opeations as
shift and logical ops + 2 array indexing(all bound checks, etc). Mountain
finds unboxing(one copy + one typecheck) for expensive. Do you thing it is
more cheap?

B\rgds
100

Nov 15 '05 #30

This thread has been closed and replies have been disabled. Please start a new discussion.
