Not all of us get to use EF 4.0 like the cool kids. Sometimes, you are on a project with a state government – for example – who won’t use a Microsoft product before its first service pack, much less Beta 2. You are stuck with a product that everyone including Microsoft has agreed is a substandard effort – EF 3.5.
One of the big complaints about EF 3.5 is that the domain model it provides is persistence aware. This means that rather than just asking the framework for a collection, say People, and getting Person objects back, you get framework objects with a bunch of extra database code in them that breaks the tenets of encapsulation. At first blush, this seems like something only people at MIT care about, but the fact is that when you change something about your database, you have to change all of these objects, which is a bad thing.
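To make the contrast concrete, here is a rough sketch (the class and property names are mine, and I have stripped out the change-notification plumbing the real generated code carries): an EF 3.5 entity inherits EntityObject and wears mapping attributes, while a persistence-ignorant version is just a class.

```csharp
// EF 3.5 generated entity: persistence-aware. The class inherits
// EntityObject and is decorated with mapping attributes, so database
// concerns leak right into the domain type.
[global::System.Data.Objects.DataClasses.EdmEntityTypeAttribute(
    NamespaceName = "ConferenceDbModel", Name = "Person")]
public partial class Person : global::System.Data.Objects.DataClasses.EntityObject
{
    [global::System.Data.Objects.DataClasses.EdmScalarPropertyAttribute(
        EntityKeyProperty = true, IsNullable = false)]
    public int PersonId { get; set; }
    // ...relationship and OnChanging/OnChanged plumbing omitted...
}

// The POCO equivalent (it would live in a separate assembly):
// persistence-ignorant. No base class, no attributes – just the data.
public class Person
{
    public int PersonId { get; set; }
    public string Name { get; set; }
}
```

When the database changes, only the mapping layer should have to know about it, which is exactly what the second shape buys you.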
Anyway. I was tasked with finding a data layer technology for this big project. I looked at NHibernate, and SubSonic, and a few others. I looked at code generators like LLBLGen. I ended up with EF 3.5 because I know I am going to want to go to EF 4 and there will be less recoding if I start with EF 3.5. That’s basically it.
I still need persistence ignorant objects, though. Enter the POCO Adapter. This project is on code.msdn.com, and it is written by people knowledgeable about how EF works. It has a code generator that takes the EDMX file and generates Plain Old CLR Objects, and a customizable transport layer. Looks like it will work.
Getting the POCO Adapter and getting started
The EF POCO Adapter is available from code.msdn.com. It comes in both C# and VB, and has sample code and tests using Northwind. The code generator compiles into a console application, and the adapter itself is just a strongly typed class file.
I started things out by generating an EDMX of SHARP, my sample conference management system.
Then I compiled the EFPocoAdapter project and was rolling.
Generating the domain model
I started by running the commandline utility on my EDMX file for SHARP.
EFPocoClassGen /inedmx:"C:\Users\Bill\Documents\Visual Studio 2008\Projects\SharpEdm\SharpEdm\SHARP.edmx"
I added the resultant sharp.cs file to my project, referenced the EFPocoAdapter project in the solution, and set up a project reference to clear up references to EFPocoAdapter and EFPocoAdapter.Classes. Wow, it really does take a long time for that dialog to come up. Anyway, then I compiled.
I ran into a weird naming problem. All of the generic PocoAdapterBase instances referenced ConferenceDbModel, which is the namespace from the EDMX, and not SharpEdm, which is the actual namespace of my classes. After discovering this, I discovered for the 10,000th time that it is a good idea to read the documentation first. Doing that, I learned that there are some options I really should set.
- /ref points to the compiled DLL that has the classes I want to map to. All I did was compile the EDMX file and use that DLL.
- /map helped with my naming issue.
So the final command looked like this:
EFPocoClassGen /inedmx:"C:\Users\Bill\Documents\Visual Studio 2008\Projects\SharpEdm\SharpEdm\SHARP.edmx"
/ref:"C:\Users\Bill\Documents\Visual Studio 2008\Projects\SharpEdm\SharpEdm\bin\Debug\SharpEdm.dll"
The tools it left me are pretty straightforward. Here is a look at them:
In the next post I’ll take a look at using them in this context.
I am working on the table of contents for my new book on designing software with SQL Server Modeling, or TheModelingSystemFormerlyKnownAsOslo. There is a lot to be shared, but TOCs are kinda boring. I thought I would use a post to jot down some random thoughts instead.
I think I am going to break the book into three parts. First, some Microsoft-centric bits about modeling in general. Second, a detailed look at M. Finally, we are gonna build an app. I want to make this thing functional. If I can't build real, usable software with it and make my life easier, then I don't want it, and I don't want you to use it. Proves how much I think of the SQL Modeling team if I am writing a book on it, huh?
Anyway. The tl;dr on M is that it eats examples and poops out your database AND your POCOs. If they eventually get to the point where complex examples are handled gracefully, then we might have something here.
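For the unfamiliar, here is roughly what that looks like – this is from memory of the CTP, and the syntax has shifted between previews, so treat it as a sketch. You declare a type and an extent, and the toolchain produces the T-SQL schema for you:

```
module SharpConference
{
    // A type describes the shape of the data...
    type Speaker
    {
        Id : Integer32 = AutoNumber();
        Name : Text;
    } where identity Id;

    // ...and an extent declares storage. Compiling this module
    // yields CREATE TABLE statements for SQL Server.
    Speakers : Speaker*;
}
```

The database falls out of the description, which is the whole "eats examples" trick.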
I am still not sold on Quadrant. They need to stop showing SQL Server table level examples. That's. Not. The. Point. If I can't gain visibility into my AD and my file structure and my partner's accounting system, I'll just use Crystal Reports or Query Analyzer.
Most people don't understand that we have been given unprecedented access into a technology that probably won't be ready for prime time for a couple of years. M is rough. Quadrant is rough. There is a lot of scope work to do, not to mention a little coding. This concept is just starting to find itself. There are technologies that SSM depends on that haven't been built yet. Have a little patience, people.
Now, this Linq to M idea is a very interesting look into the thinking of the SSM team. We describe something in M. We upload to the repository. We can 'query' the example in C# with Linq. Great. Are we ever going to use M to move REAL data, rather than example data? No? Then how is this different from Linq to SQL? It isn't? Oh, OK.
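The querying side, at least, is plain LINQ. Ignoring the repository plumbing (whose API I won't guess at here – the Speaker class and data below are my own stand-ins), the experience over example data looks like ordinary LINQ to Objects:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class Speaker { public string Name; public string Topic; }

class Program
{
    static void Main()
    {
        // Stand-in for example data pulled from the repository.
        var speakers = new List<Speaker>
        {
            new Speaker { Name = "Bill",  Topic = "EF" },
            new Speaker { Name = "Shawn", Topic = "DSLs" }
        };

        // The same query shape works whether the source is a
        // List<T>, Linq to SQL, or the Linq to M sample.
        var dslSpeakers = from s in speakers
                          where s.Topic == "DSLs"
                          select s.Name;

        foreach (var name in dslSpeakers)
            Console.WriteLine(name); // prints "Shawn"
    }
}
```

Which is exactly the point of the complaint: once you are here, it is hard to see what you got that Linq to SQL didn't already give you.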
Generally, though, this is something I have been saying since my "Web Design With The End in Mind" article back in 1999. The data is the application. If you understand the data – which you somehow must store – and its behavior – the business rules, or metadata – then your application is done. Skin it and go get a beer.
Wow, Shawn Wildermuth's article on Textual DSLs is really good. Why haven't I seen this? I thought he was just a UI geek. Color me impressed.
Sorry this is so stream-of-consciousness. It is late and this cough is keeping me from sleeping.
I think that SQL Server Modeling has the potential to dramatically change how Microsoft architects design software for clients. I think it has the potential to improve design too, by making good, principled design part of the fabric rather than a documented rule. I really, really wish they hadn't pinned it to SQL Server, but I guess I get it. The data is the application. That's why I have called it the Data Driven Web for fifteen years.
A recent reader emailed to wax nostalgic about VB6 and to ask about dynamic typing – among other things – in VB.NET. Although I feel that static typing has a more firmly established place than ever in software development, I commiserated with him in this response:
A lot of people agree with you. There is even a well-established petition at http://classicvb.org/ that states these and a lot of other problems with VB.NET, and the .NET Framework in general.
However, you have to remember that VB.NET is a language, and VB6 is a program, language and framework all in one. As I pointed out in Chapter 1, they are designed to do different things. A lot of the things that you could do in VB6 are considered bad practice now, due to scalability and other issues, although many are coming back into vogue.
For instance, you point out the Variant issue. What you are saying here is that VB6 is dynamically typed and VB.NET is statically typed. Coding for a statically typed system is a lot slower than for a dynamically typed system. You will, however, have a lot fewer errors with a statically typed system, because any typing issues will usually be caught at compile time.
I have always found it interesting that many community members belittle VB6 for being a 'hack' out of one side of their mouth, and then extol the speed and ease of Ruby out of the other side of their mouth, never realizing that the dynamic typing system is a common defining characteristic of both 'languages'.
I should add that dynamic typing is coming back in VB10 (as it is in C# 4.0) with the dynamic language runtime. Hopefully that will return some of the flexibility to the language that you are looking for.
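A quick sketch of what that looks like in C# 4.0 – a `dynamic` variable defers member resolution to runtime, much like a VB6 Variant did:

```csharp
using System;

class Program
{
    static void Main()
    {
        // Static typing: the compiler rejects this line outright.
        // string s = 5;            // compile-time error CS0029

        // Dynamic typing via the DLR: this compiles, and member
        // lookup happens at runtime instead.
        dynamic v = 5;
        v = "now I'm a string";     // fine – like a VB6 Variant
        Console.WriteLine(v.Length); // resolved at runtime: 16
    }
}
```

The flexibility comes back, along with the old trade-off: a typo in `.Length` would now blow up at runtime instead of at compile time.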