Previously, compilers compiled to a safe, generic flavor of machine code (typically a baseline x86 target like the Pentium). You got reduced run speed because the code was generic, but at least it ran!
With .NET you now compile into what is called MSIL, an intermediate language that looks ALMOST like machine code. Any language that targets .NET compiles to MSIL, and this is the core of the .NET Framework. It enables two programmers using drastically different .NET languages (say, C# and Delphi) to use each other's work easily.
This MSIL is what is shipped in the executable. Once run on the client computer, the JIT (Just-In-Time) compiler takes over. It compiles the MSIL into machine code SPECIFICALLY for that client's processor. Additionally, it only compiles the methods that are actually used; hence, a program you are running might never be fully compiled. This is for performance's sake.
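The "compile only what runs, on first use" idea can be sketched in a few lines. This is a toy Python illustration of lazy compilation, NOT how the real CLR JIT works; the `ToyJit` class and its method names are invented for this sketch.

```python
# Toy illustration of "compile on first use" -- a sketch of the lazy
# compilation idea, not the actual CLR JIT.

class ToyJit:
    def __init__(self):
        self.compiled = {}       # method name -> "native" code (cached)
        self.compile_count = 0   # how many methods we actually compiled

    def compile(self, name, source_fn):
        """Pretend to translate IL into machine code (here: just wrap it)."""
        self.compile_count += 1
        def native(*args):
            return source_fn(*args)
        return native

    def call(self, name, source_fn, *args):
        # A method is compiled only the first time it is actually called.
        if name not in self.compiled:
            self.compiled[name] = self.compile(name, source_fn)
        return self.compiled[name](*args)

jit = ToyJit()
add = lambda a, b: a + b
jit.call("add", add, 1, 2)   # first call: triggers compilation
jit.call("add", add, 3, 4)   # second call: reuses the cached code
# Methods that are never called are never compiled at all:
print(jit.compile_count)     # 1
```

The payoff is the same as in the real runtime: startup only pays for the code paths you actually exercise.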
By compiling directly to machine code for the client's processor, you get an increased run speed. This helps compensate for the overhead of .NET's GC (Garbage Collection) and of the JIT compilation itself. Some argue it's slower, some argue it's faster; my money is on 'it depends'. It probably depends on the application you are running, and that's my stance.
The icing on the cake of .NET is the GC. No more stray pointers, and (almost?) no more memory leaks. TECHNICALLY SPEAKING, of course. I have not gone to great lengths to prove this, but no .NET program I have written has had a memory leak that I was aware of.
The GC works by using what is called the Managed Heap. Traditionally, you used the stack or the native heap for memory. You grabbed memory as you needed it, and dealt with direct memory addresses. With GC, you are dealing with a layer of abstraction, and you never see the memory's exact location. The GC shuffles memory around, cleaning up unused pieces and compacting away the gaps that frequently build up in native apps. Imagine this scenario:
Native Program:
[Object Block = 4 bytes][Object Block = 4 bytes][OtherObject = 5 bytes][Object Block = 4 bytes]
Now, if the program released the OtherObject, and a 4-byte Object was later placed in its slot, you would have 1 byte of unused space (and good luck filling it). The managed program, on the other hand, has its objects shuffled to close that gap as best as possible, so the memory can be fully utilized.
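The compaction step above can be simulated in a few lines. This is a toy Python sketch of the idea of sliding live blocks together; the real .NET GC is far more sophisticated (generations, pinning, reference fixups), and the tuple representation here is invented for illustration.

```python
# Toy sketch of heap compaction -- an illustration of the idea, not the
# actual .NET GC. Blocks are (name, size) tuples; None marks a freed hole.

heap = [("Object", 4), ("Object", 4), ("OtherObject", 5), ("Object", 4)]

# Free the OtherObject: a native heap would leave a 5-byte hole behind.
heap[2] = None

def compact(blocks):
    """Slide live blocks together so all free space sits at the end."""
    return [b for b in blocks if b is not None]

compacted = compact(heap)
# After compaction there is one contiguous free region at the end instead
# of a 5-byte hole in the middle, so any future allocation can use it.
print(compacted)  # [('Object', 4), ('Object', 4), ('Object', 4)]
```

Note that because objects move, the runtime has to update every reference to them behind the scenes; that is exactly why managed code never hands you a raw memory address.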
This is just the tip of the iceberg. I could ramble on forever about .NET if need be, but this should give you a good baseline.