Not all of us get to use EF 4.0 like the cool kids. Sometimes, you are on a project with a state government – for example – who won’t use a Microsoft product before its first service pack, much less Beta 2. You are stuck with a product that everyone including Microsoft has agreed is a substandard effort – EF 3.5.
One of the big complaints about EF 3.5 is that the domain model it provides is persistence aware. This means that rather than just asking the framework for a class, say People, and getting a Person object back, you get a framework object with a bunch of extra database code in it that breaks the tenets of encapsulation. At first blush, this seems like something that only people at MIT care about, but the fact is that when you change something about your database, you have to change all of these objects, which is a bad thing.
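To make that concrete, here is a rough sketch of the difference. The Person class and its properties here are just illustrative, not from any real model:

```csharp
using System.Data.Objects.DataClasses;

namespace Generated
{
    // Roughly what the EF 3.5 designer generates (heavily simplified):
    // the entity is welded to the framework by its base class, and the
    // real generated setters also call ReportPropertyChanging/Changed.
    public partial class Person : EntityObject
    {
        public int PersonId { get; set; }
        public string Name { get; set; }
    }
}

namespace Domain
{
    // What we actually want: a persistence-ignorant POCO.
    // Nothing in this class knows a database exists.
    public class Person
    {
        public int PersonId { get; set; }
        public string Name { get; set; }
    }
}
```

Change the database, and every one of those Generated classes changes with it; the Domain version doesn't care.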
Anyway. I was tasked with finding a data layer technology for this big project. I looked at NHibernate, and SubSonic, and a few others. I looked at code generators like LLBLGen. I ended up with EF 3.5 because I know I am going to want to go to EF 4 and there will be less recoding if I start with EF 3.5. That’s basically it.
I still need persistence ignorant objects, though. Enter the POCO Adapter. This project is on code.msdn.com, and it is written by people knowledgeable about how EF works. It has a code generator that takes the EDMX file and generates Plain Old CLR Objects, and a customizable transport layer. Looks like it will work.
Getting the POCO Adapter and getting started
The EF POCO Adapter is available from code.microsoft.com. It comes in both C# and VB, and has sample code and tests using Northwind. The code generator compiles into a console application, and the adapter itself is just a strongly typed class file.
I started things out by generating an EDMX of SHARP, my sample conference management system.
Then I compiled the EFPocoAdapter project and was rolling.
Generating the domain model
I started by running the command-line utility on my EDMX file for SHARP.
EFPocoClassGen /inedmx:"C:\Users\Bill\Documents\Visual Studio 2008\Projects\SharpEdm\SharpEdm\SHARP.edmx"
I added the resultant sharp.cs file to my project, referenced the EFPocoAdapter project in the solution, and set up a project reference to clear up references to EFPocoAdapter and EFPocoAdapter.Classes. Wow, it really does take a long time for that dialog to come up. Anyway, then I compiled.
Ran into a weird naming problem. All of the generic PocoAdapterBase instances referenced ConferenceDbModel, the namespace of the EDMX classes, and not SharpEdm, the namespace of my actual classes. After discovering this, I discovered for the 10,000th time that it is a good idea to read the documentation first. Doing that, I learned that there are some options I really should set.
- /ref sets the compiled DLL that has the classes that I want to map to. All I did was compile the EDMX file and use that DLL.
- /map helped with my naming issue.
So the final command looked like this:
EFPocoClassGen /inedmx:"C:\Users\Bill\Documents\Visual Studio 2008\Projects\SharpEdm\SharpEdm\SHARP.edmx"
/ref:"C:\Users\Bill\Documents\Visual Studio 2008\Projects\SharpEdm\SharpEdm\bin\Debug\SharpEdm.dll"
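For what it's worth, here is the sort of consuming code I am expecting to write against the generated classes. SharpEntities and Speakers are my guesses at the generated container and entity set names – the real ones come from the EDMX – so treat this as a sketch:

```csharp
using System;
using System.Linq;

class Demo
{
    static void QuerySpeakers()
    {
        // SharpEntities is a hypothetical name for the generated POCO
        // container; Speakers is a hypothetical entity set.
        using (var context = new SharpEntities())
        {
            var speakers = from s in context.Speakers
                           where s.LastName.StartsWith("S")
                           orderby s.LastName
                           select s;

            foreach (var speaker in speakers)
                Console.WriteLine(speaker.LastName); // plain POCOs come back
        }
    }
}
```

The point being: the calling code sees nothing but POCOs, and the adapter handles the plumbing back to EF.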
The tools it left me are pretty straightforward. Here is a look at them:
In the next post I’ll take a look at using them in this context.
I am working on the table of contents for my new book on designing software with SQL Server Modeling, or TheModelingSystemFormerlyKnownAsOslo. There is a lot to be shared, but TOCs are kinda boring, so I thought I would use a post to capture some random thoughts.
I think I am going to break the book into three parts. First, some Microsoft-centric bits about modeling in general. Second, a detailed look at M. Finally, we are gonna build an app. I want to make this thing functional. If I can't build real, usable software with it and make my life easier, then I don't want it, and I don't want you to use it. Proves how much I think of the SQL Modeling team if I am writing a book on it, huh?
Anyway. The tl;dr on M is that it eats examples and poops out your database AND your POCOs. If they eventually get to the point where complex examples are handled gracefully, then we might have something here.
I am still not sold on Quadrant. They need to stop showing SQL Server table level examples. That's. Not. The. Point. If I can't gain visibility into my AD and my file structure and my partner's accounting system, I'll just use Crystal Reports or Query Analyzer.
Most people don't understand that we have been given unprecedented access into a technology that probably won't be ready for prime time for a couple of years. M is rough. Quadrant is rough. There is a lot of scope work to do, not to mention a little coding. This concept is just starting to find itself. There are technologies that SSM depends on that haven't been built yet. Have a little patience, people.
Now, this Linq to M idea is a very interesting look into the thinking of the SSM team. We describe something in M. We upload to the repository. We can 'query' the example in C# with Linq. Great. Are we ever going to use M to move REAL data, rather than example data? No? Then how is this different from Linq to SQL? It isn't? Oh, OK.
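For the record, here is roughly what I understand that round trip to look like today. Every name below is made up – I haven't pinned down the actual Linq to M surface yet, so this is a sketch of the workflow, not the API:

```csharp
// Hypothetical sketch only. Assume an M module like this has been
// deployed to the repository:
//
//     module Conferences {
//         type Speaker { Name : Text; }
//         Speakers : Speaker*;
//     }
//
// RepositoryContext and Speakers are invented names, not a real API.
var repository = new RepositoryContext("server=.;database=Repository");

var bills = from s in repository.Speakers
            where s.Name == "Bill"
            select s;
```

Describe in M, deploy to the repository, query with Linq. Which, again, is exactly what Linq to SQL already does with real data.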
Generally, though, this is something I have been saying since my "Web Design With The End in Mind" article back in 1999. The data is the application. If you understand the behavior of the data – which you must somehow store as business rules (or metadata) – then your application is done. Skin it and go get a beer.
Wow, Shawn Wildermuth's article on Textual DSLs is really good. Why haven't I seen this? I thought he was just a UI geek. Color me impressed.
Sorry this is so stream-of-consciousness. It is late and this cough is keeping me from sleeping.
I think that SQL Server Modeling has the potential to dramatically change how Microsoft architects design software for clients. I think it has the potential to improve design too, by making good, principled design part of the fabric rather than a documented rule. I really, really wish they hadn't pinned it to SQL Server, but I guess I get it. The data is the application. That's why I have called it the Data Driven Web for fifteen years.
A recent reader emailed to wax nostalgic about VB6, and to ask about dynamic typing – among other things – in VB.NET. Although I feel that static typing has a more firmly established place than ever in software development, I commiserated with him in this response:
A lot of people agree with you. There is even a well-established petition at http://classicvb.org/ that states these and a lot of other problems with VB.NET, and the .NET Framework in general.
However, you have to remember that VB.NET is a language, and VB6 is a program, language and framework all in one. As I pointed out in Chapter 1, they are designed to do different things. A lot of the things that you could do in VB6 are considered bad practice now, due to scalability and other issues, although many are coming back into vogue.
For instance, you point out the Variant issue. What you are saying here is that VB6 is dynamically typed and VB.NET is statically typed. Coding against a statically typed system is a lot slower than against a dynamically typed one. You will, however, have a lot fewer errors with a statically typed system, because typing issues are usually caught at compile time.
I have always found it interesting that many community members belittle VB6 for being a 'hack' out of one side of their mouth, and then extol the speed and ease of Ruby out of the other side of their mouth, never realizing that the dynamic typing system is a common defining characteristic of both 'languages'.
I should add that dynamic typing is coming back in VB10 (as it is in C# 4.0) with the dynamic language runtime. Hopefully that will return some of the flexibility to the language that you are looking for.
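A quick taste of what that looks like in C# 4.0 (the VB10 story is equivalent):

```csharp
using System;

class DynamicDemo
{
    static void Main()
    {
        // With dynamic, member lookup is deferred to run time,
        // much like a VB6 Variant.
        dynamic value = "hello";
        Console.WriteLine(value.Length); // resolved at run time: 5

        value = 42; // same variable, new type - perfectly legal

        // Uncommenting the next line compiles fine but throws at run
        // time - exactly the class of error static typing catches at
        // compile time.
        // Console.WriteLine(value.Length);
    }
}
```

Best of both worlds, if you use it sparingly.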
My current client has blocked Twitter and Live Mesh at the firewall. At what point are organizations going to realize that social networking is beneficial to project progress? Now, I can no longer access my network of peers (well, I "can" but they are trying to prevent me), which has already provided me with many leads, links and ideas related to making this project better. Now I can no longer access my repository of project files, where I am getting all of my templates and reference documentation.
What is the point? Are they trying to prevent people from wasting time? How about blocking YouTube? How about uninstalling Solitaire? How about not providing access to the external internet at all? There are a lot of clericals here, and many of them are temps, so why don't you just lock everything down? If that is too draconian, how about two firewall profiles, one for developers and another for clericals?
This fear of the Internet is remarkable in this day and age. Watching organizations (especially government organizations) try to bridge the gap of providing free access to information and keeping the temps from surfing porn is very frustrating for me.
Yesterday I went to Best Buy to get a Zune, and instead saved myself $200 by buying a Sony Walkman. Why? It does the same thing (plays music) and it has a Mini USB port rather than some useless fancy custom job that requires me to carry yet ANOTHER cable.
You see, I never wanted an MP3 player. I just want to use my HTC Touch Diamond. But using it as a media player is a heavy tradeoff against battery life. In general, if I want to be able to make a call at 3PM, I’d better not listen to music at 10AM. Convergence will work when we have leeetle nuclear reactors for our cell phones. But that is a post for another day.
The one thing I wanted to be able to do with the Zune was subscribe to podcasts. My pain is that podcast subscriptions plus a crappy cable interface are not worth $200 to me. “The Sony will do fine, and I will just figure out the podcast thing,” I thought.
Well, with a lot of help from Michael Young’s blog, which led me to Jake Ludington’s blog, I have a setup that isn’t perfect, but it works. This updates those two entries for Win 7, IE8 and the latest Windows Media Player – is it 12? I’m not sure. Anyway, here goes:
1) First step is to subscribe to the podcast with IE 8. Navigate to a page with a feed you would like to subscribe to (like ExoticLiability.com) and click the View Feeds for this Page button in IE8.
2) Click on the “Subscribe to this feed” link on the RSS viewer page.
3) When you have subscribed to everything you are looking for, click on the Favorites button, and then the Feeds tab. I made a Podcasts folder there to keep them organized.
4) Right click on the feed and select Properties. Check the Automatically Download Attached Files checkbox.
5) As it turns out, IE8 puts all the attachments from feeds in subfolders inside one Temporary Internet Files folder. If you wait until IE gets some of the files, click the View Files button, and then go up one level in the directory, you can see what I mean:
6) On my machine, that folder is C:\Users\Bill\AppData\Local\Microsoft\Windows\Temporary Internet Files\Enclosure. YMMV. Might want to put the path on your clipboard, you’ll use it a lot.
7) Go to Windows Media Player (henceforth WMP).
8) Click on Organize / Manage Libraries / Music.
9) Click the Add button, and paste the path from Step 5.
10) Click Include Folder, then click OK.
11) Click the little arrow next to Create Playlist and select Create Auto Playlist.
12) Name the new playlist Podcasts.
13) Right click on the new auto playlist and select Edit.
14) Click the green plus sign under Music in my Library, scroll to the bottom of the list, and select More.
15) In the Choose a filter dialog, select File Name.
16) Click the Click to Set link, and paste in the path you found back in step 5.
17) Click OK, then go have a cup of coffee while everything updates.
18) When you get back, plug in your MP3 player. I have the Sony Walkman E Series.
19) Windows Media Player will open the Sync tab. Drag the Podcasts playlist to the Sync pane.
20) Click Sync.
It was a pain, but now it is set up, and I saved $200, plus probably the Zune Pass and 35 accessories I woulda bought. And I think this works better. I’ll have a standard process where I bring the player downstairs, plug it in to charge and sync, then come down in the morning to get it. Next post, I might even write a PowerShell script that automatically syncs when I plug it in. Hope this helps someone!
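In the meantime, here is a sketch of the kind of script I mean. It assumes the Walkman mounts as a plain drive at E:\ and that you want the MP3s in a Podcasts folder on it – adjust both paths for your setup:

```powershell
# Sketch only - a dumb file copy, not a real WMP sync.
# Assumes the player mounts as E: (yours may differ).
$source = "$env:LOCALAPPDATA\Microsoft\Windows\Temporary Internet Files\Enclosure"
$dest   = "E:\MUSIC\Podcasts"

Get-ChildItem $source -Recurse -Filter *.mp3 |
    Where-Object { -not (Test-Path (Join-Path $dest $_.Name)) } |
    ForEach-Object { Copy-Item $_.FullName -Destination $dest }
```

Skips anything already on the player, so running it on every plug-in is cheap.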
I was stoked to launch Visual Studio 2010 for the first time and see in the Information bar a listing for 'cloud.'
"This can only mean one thing" I thought. "Cloud services are actually a first class citizen in Visual Studio 2010! Finally."
Well, not completely. If you click on New Project, there is a Cloud Services project type, but it only has one project in it ... wait .. how is that ... Oh. I see. That's not a project, it's a link to download the Azure tools. Aah well.
Nonetheless, I hear that the Ultimate edition of Visual Studio comes with 750 hours of Azure compute time. That should give people a reason to download it and give it a try. I know I will.
This site is getting probably twenty spam comments a day. I know that these are inexpensive workers that are paid by the post to get past my Captcha. They say something unrelated and put their employer's URL in the Link field of the post to increase the link count for that URL, thus increasing the Google rank for that post. It is one of the ways that the fake SEO companies 'guarantee' you a top ten ranking for your URL.
I have a message for these people.
All comments on this site are approved by me. I don't approve spam posts. You are wasting your time, and taking money out of your OWN POCKET bothering to spam here. Please leave me alone.
Now back to your regularly scheduled programming.
I was an early adopter of Live ID. I was a Passport user before you could use your own email address; my first passport was email@example.com. After it went to Live ID I set up an ID at firstname.lastname@example.org. I mostly use the hotmail address for personal stuff like xbox, and the pointweb account for professional stuff, like my partnership account.
After the 2006 Author's Summit I learned about the early beta of Office Live, and joined. I created a new ID - email@example.com - specifically for the project, but I included firstname.lastname@example.org in the Office Live account so I could integrate my email.
Long story short, Office Live isn't very good. It is basically Google Apps, but it costs $20 a month and breaks a lot. So, I went to Google in the summer of last year. I moved my email and cancelled my Office Live account. All was happy.
Two weeks ago, I figured out that Office Live has been billing me for a year for the service I cancelled. I logged into billing.microsoft.com and cancelled the service. Then I got ready to write an email ripping Office Live a new one. I went back to billing.microsoft.com to get my history ... and couldn't. My email@example.com Live ID account was deleted when I cancelled the service. "Well, that's OK," I thought, "I set up that account just for that reason."
But, they deleted my firstname.lastname@example.org account too.
I couldn't believe it. I looked EVERYWHERE for a phone number - there is no phone support for Live ID. Put in email tickets. After 32 responses, I gave up. All they did was say "check your password ... account is disabled ... check with Office Live." Office Live, after 40 responses, told me to leave them alone. Not their problem.
So basically, Microsoft screwed me. My Mesh account, my Azure account, my Connect account, Messenger, MSDN, my Partner account, my Live Space, everything is gone. Can't get it back.
Notice something. All of those services are free. Microsoft doesn't care. How could they? I'm not paying them! They are within their rights to delete any of those accounts anytime they want.
It was my fault for trusting them with my information.
We all do this a lot. Why pay for software if you can get it for free, right? Free is cheaper, right? Well, no, not when the REAL owner of the software has an attitude like this.
So, I need to stop depending on free services. I am getting out of Google too, because if they cancelled things right now, I would be toast. Going back to SmarterMail for my email, or something like it. Something I control; something I paid for. I moved my blog back to a server I can touch too (though I am using free software, at least it is my build).
Remember this when you recommend something free to a client. They will get what they paid for.
EDIT: Here is some Google fodder: Windows Live ID Error 80048826 means "Your Live ID is gone because the Live ID Database is hopelessly corrupt due to poor architecture and worse implementation. We wish we'd used OpenID too."
When teaching beginners how to pick, I find that they quickly learn they can hold the lock up to their ear and listen for pins dropping as they release tension on the wrench. If you have lifted any pins, the springs will snap them back into position with a little ‘click’. If you know how many pins are in the lock (which you should), then you can ‘see how close you were.’
This doesn’t work.
There are two common errors in beginning lockpicking. The first is too much tension. This is a problem because if you rotate the cylinder within the lock too far, every pin will feel like it is binding. You will hold the pin stacks against the shear line no matter what, and you’ll get very poor feedback about whether you are actually lifting to the shear line.
The second mistake is overlifting. Few people realize how little pressure is required to actually lift the key pin, and it is common to just ram the whole pin stack all the way to the roof without stopping at the shear line – a problem compounded by providing too much tension.
Those two problems combine to give a false sense of what is happening inside the lock. If you lift all of the key pins past the shear line – very easy on cheap locks – and then release tension, you’ll be able to hear all of the pins drop. This causes the ‘oh, but I had it, ‘cause I could hear them drop’ problem. The problem is that you didn’t have it, and there is nothing wrong with the lock – you just overlifted. It’s a common mistake.
The best thing you can do, in my opinion, is not listen at all. It’s like sniffing the cork when tasting wine – it’s not going to tell you anything. The sommelier offers you the cork so you can make sure it isn’t dry or crumbling – NOT so you can sniff it. Experienced pickers listen to a lock to check for something in particular, not just to see if they have any pins lifted. I sometimes listen early on, to see if my feedback is lying to me: I try to set one pin, and then listen for whether it snaps back. I won’t know whether it was overlifted or just jammed into position with too much tension, but I will know whether I made one pin stick.
So, don’t listen at the lock, at least when starting out. Trust your fingers, and start with easy locks.
I was very pleased to be able to give my C# 4.0 talk at the Central Ohio .NET Developers Group last month. Carey Payette accepted my offer to give the talk – based on the last section of my upcoming C# All In One book from Wiley – and I did my best to polish the talk to get it to the level expected by the fine people of CONDG. Hope I met everyone’s expectations! The reviews were very nice.
I utterly failed to get any pictures, although @leshka posted this one to TwitPic. I did get some video, which I’ll put on my SpeakerSite after I get it rendered.
It was a great turnout – 103 people I think. Wonderful questions too, and some great feedback from many attendees. Bill Melvin posted a review on his blog, which I appreciate. Twitter was rockin’ with comments from attendees, too. I agree – the fact that the wholesale changes to the language are more or less just for COM compatibility is somewhat disappointing, but the dynamic language features still excite me. Also, Tim, I agree that just because the dynamic keyword exists doesn’t mean we should use it.
I ran without slides, but I did use a big Visual Studio solution. That compressed folder of samples is here, warts and all. Feel free to dig in and see what a warped brain is really like. The snippets don’t travel well, but the sample code is all in the Examples file.
Anyway, great time, everyone, hope to do it again after I finish the research for the Oslo book.