A First Look at .NET

At first glance the .NET framework appears to be a new departure for Microsoft - a move from compiling to native code to compiling to tokenised code. However, much of what you see in the .NET framework has its roots in the technologies that have come out of Redmond over the last twenty years or so. The .NET framework is the culmination of those technologies in a bold attempt to unify Windows development. Much of the .NET base class library bears more than a passing resemblance to the Win32 API; very little of it looks like anything that could be described as an existing cross-platform standard.

.NET applications are compiled to an intermediate language - opcodes that mean absolutely nothing to any existing processor, and intentionally so. To facilitate application development Microsoft has provided the .NET base class library, which I regard as being in two parts: the core base class library, and 'the rest'. The core base class library is the part of the library that will be supported on other (non-Microsoft) platforms, should such a port occur, whereas 'the rest' represents the application details that make our applications really interesting - APIs like directory access, message queuing, Windowing and access to the .NET component services (formerly, and significantly, known as COM+ component services). The core library allows you to create objects and do mundane but useful things, like accessing files and writing socket-based code, but it is not yet clear whether technologies like ADO.NET, the XML classes and the message queuing classes will be available on platforms other than Windows. Without such support it is debatable whether a .NET port is likely to gain much popularity.

.NET applications - your code, together with the base class library and any other libraries it references - are loaded at runtime into the .NET execution engine, and most applications will be just-in-time (JIT) compiled by the .NET JIT compiler. 
This compiles the platform-independent intermediate language opcodes to native code for the processor on which the JIT compiler is running. Using intermediate code and JIT compilation means that Microsoft has built into the framework the ability for .NET applications to run on any operating system that supports the .NET runtime. Indeed, the remoting architecture of .NET assumes that by default objects are passed by value: when an object is passed to another machine, the actual data in the object is passed, and a copy of the object is initialised on the remote machine and run there. If the assembly - the package that contains the object's code - is not present on the remote machine, it will be downloaded by that machine. This passing around of assemblies can only work if the code in each assembly is platform independent.

While it is too early to say whether all operating systems will support the .NET runtime, it is clear that the majority of the world's desktop computers, and a significant proportion of the world's mobile devices, will support it, because these computers and devices run Windows of one flavour or another. Microsoft is committed to providing .NET for all of its 32-bit operating systems as well as its 64-bit operating systems of the future. When Microsoft says that .NET will run on 'other platforms' I take this to mean 'other Windows platforms'. Think of the pain that was experienced going from 16-bit Windows to 32-bit Windows, or of the pain of developing CE Win32 applications when you are used to developing NT Win32 applications, and you'll see how .NET will make Windows development far easier in the future. However, one has to be careful about reading too much into that commitment. Just because there will be .NET for Windows CE devices does not mean that an application conceived and compiled on Windows ME will run correctly on a mobile device. 
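The pass-by-value remoting described above can be sketched in C#; the type and member names here are hypothetical, chosen only to show the two marshalling choices side by side:

```csharp
using System;

// Marked [Serializable], so remoting marshals it by value: the object's
// fields are copied to the remote machine, and if the assembly holding
// this type is not present there, the runtime can download it.
[Serializable]
public class StockQuote
{
    public string Symbol;
    public decimal Price;
}

// By contrast, a type deriving from MarshalByRefObject stays on its home
// machine; remote callers receive a proxy rather than a copy.
public class QuoteServer : MarshalByRefObject
{
    public StockQuote GetQuote(string symbol)
    {
        StockQuote quote = new StockQuote();
        quote.Symbol = symbol;
        quote.Price = 10.5m;
        return quote;   // serialised and copied to the calling machine
    }
}
```

The choice between the two base behaviours - copy the data, or keep the object where it is and talk to it through a proxy - is exactly the decision that determines whether an assembly needs to travel at all.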
A developer still has to pay attention to the facilities of the target device: for example, the limited screen real estate on Windows CE devices makes MDI (multiple document interface) applications unusable, even though the .NET namespace for Windowing, System.WinForms, makes developing such applications simple. The solution to this problem is componentization: separating business logic into objects distinct from the UI 'presentation' code. Much current VB code pays scant regard to such an n-tier approach, even though it has been in vogue for many years. .NET will make this style of development a requirement, simply because it will be too difficult (and in most cases impossible) to maintain a cross-platform code base without it. So if the .NET framework does not give us the holy grail of write-once-run-anywhere, what does it give us? Lots:
These facilities represent what could be described as the embodiment of "joined-up thinking" at Microsoft, where finally all teams are pulling in the same direction. Previously, innovations at Microsoft appeared in separate groups and slowly diffused throughout the company, rather than arriving as part of an overall strategy. This was rather destabilising for the outsider, because it was never clear whether such innovations would benefit all Microsoft users or just a small group of them. In other situations, tensions between groups in Microsoft restricted innovation.

A classic case is COM. COM actually comes in two flavours: interface-based RPC and 'object-based' automation. RPC-based COM was very much the preserve of C++ and NT developers; automation-based COM was the preserve of Visual Basic. When the two technologies were merged - significantly, with the merging of the MIDL and MkTypeLib tools - the problem became worse, because 'COM' developers found they had to use two different APIs designed and controlled by two different groups with two different aims. VB programmers were told that COM was now 'interface-based' and that this was the better way to access objects, as indeed it was. The problem was that they were still allowed to access objects in the old VB way; most VB developers never bothered to use interfaces, and consequently many of the innovations in MTS and COM+ were either lost to them or misunderstood. On the other hand, C++ developers were told to use type information because it simplified marshalling and deployment, but when it became clear to developers used to MIDL that they had lost most of the useful marshalling facilities, they ignored type libraries and stayed with the RPC-based interfaces. The unification of the two types of COM merely polarised developers further, and resulted in the current, divided, COM community. 
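The two flavours correspond to two access styles that the .NET object model supports side by side: compile-time, interface-based calls (the C++/RPC tradition) and late-bound, by-name calls through reflection (the VB/automation tradition). A hedged C# sketch, with a hypothetical Calculator type standing in for a real component:

```csharp
using System;
using System.Reflection;

public interface ICalculator
{
    int Add(int a, int b);
}

public class Calculator : ICalculator
{
    public int Add(int a, int b) { return a + b; }
}

public class Demo
{
    public static void Main()
    {
        // Early bound: call through the interface, checked at compile
        // time - the style interface-based COM developers are used to.
        ICalculator calc = new Calculator();
        Console.WriteLine(calc.Add(2, 3));

        // Late bound: resolve the member by name at runtime, much as
        // VB's automation-style access did through IDispatch::Invoke.
        object obj = new Calculator();
        object result = obj.GetType().InvokeMember(
            "Add", BindingFlags.InvokeMethod, null, obj,
            new object[] { 2, 3 });
        Console.WriteLine(result);
    }
}
```

Both routes reach the same method on the same object; neither camp has to give up its idiom, which is precisely the unification the merged COM never achieved.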
In this one area, .NET has solved the problem by redesigning the object model from the bottom up, building both interface-based programming and object-based access into the model in a way that is neither confusing nor restrictive. Furthermore, Microsoft has built in interoperation that allows .NET code to use existing COM objects, and allows .NET objects to be accessed as if they were COM objects (again, either in the C++, interface-based manner or in the VB-like, object-based manner). If you like, the object model has successfully been designed as one-size-fits-all. This ethos pervades the .NET base class library, both in the core part and in the rest of the library. Many of the areas that have traditionally been difficult to program have been simplified, and this can only be for the better.

So, does .NET have a future? Indeed it does: Microsoft is betting on it as its next big thing. Does it represent a positive move forward? Yes, because it makes accessing all the great facilities that have accumulated in Win32 since NT 3.1 was released far easier, and as a result there will be a great influx of new applications. Will .NET be everywhere? I doubt it; the only situation I can envisage where that could occur is if every computer on the planet were a Windows machine, and I am sure the anti-monopolists would intervene before that could ever happen.