Advantages / Disadvantages of Entity Framework
Michel Posseth •
Advantages: one common syntax (LINQ / Yoda) for all object queries, whether the source is a database or not; pretty fast if used as intended; easy to implement SoC (separation of concerns); less coding required to accomplish complex tasks.
Disadvantages: you have to think in a non-traditional way of handling data, and it is not available for every database.
Mustafa Naser •
Development with EF4 is fast and streamlined. However, you should think twice before using EF4 for projects with an expected long operational lifetime (e.g. 10 years or so), to avoid a dependency on a framework that you don't manage yourself.
Marco Aurelio Campos Baccaro •
We have two points to think about: ORM in general, and EF4 in particular.
If your project is small/medium sized, without high-performance or mission-critical requirements, an ORM can be one option. If your project has requirements for high performance, mission criticality, a complex domain and hard business logic, an ORM isn't indicated. You should avoid a strong dependency on any framework or technology in solutions with an expected long lifetime.
Rogerio Prudente •
I would add that you could start developing a solution without the database model by using the Entity Framework. That means: while the DBAs are in an infinite discussion about the database model, people could start coding, since what they will query are objects, not tables/views.
In that sense, it could be a good proof of concept. Later on, once the database model is agreed upon and with a better view of the availability/mission-criticality/resource picture, the data access layer, which was using the Entity Framework, could be swapped for something else.
A place where Entity Framework is really nice is on the creation of WCF Data Services.
But, like the others said: there is no single answer. It always depends on your requirements.
Live long and prosper.
Shawn Deggans •
I think EF4 is a fantastic solution. It's fast,
it's easy to work with, and I've even managed to find ways to use it
effectively in an SOA/Web environment. There are drivers for most of the
major databases. I'm using it with MSSQL and MySQL right now. The only
issue I've had with MySQL had to do with EF4 not handling record locks
correctly. I would use EF4 over any other ORM right now because EF4 is Microsoft-supported (especially relevant if your database is MSSQL), which means they'll either have an upgrade path for the product or at least an elegant way to deprecate it in the future. As far as long life-cycle
projects go, if you need mission-critical, super fast transaction
processing and you're worried about the upgrade path changing for the
future, don't build on top of a framework. It seems a little silly to
tell someone not to use EF4 for these reasons. If that's a reason not to
use EF4, then you should not use .NET, because it's going to change at
least 9 times in that 10 year lifespan. I think EF4 is a safe learning
investment for the developer and a safe architectural bet for most
enterprise applications.
Manikandan Janakiraman •
Disadvantage!!
If there is any schema change in the database, EF won't work!!! You have to update the schema in the solution as well!!!
Advantage!!
It's fast and straightforward to Add/Modify/Delete/Update using LINQ/EF objects.
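As a rough sketch of that add/modify/delete flow (the `Product` entity and `ShopContext` names are hypothetical; the API is the EF 4.1 `DbContext` style):

```csharp
using System.Data.Entity;
using System.Linq;

public class Product
{
    public int ProductId { get; set; }
    public string Name { get; set; }
}

public class ShopContext : DbContext
{
    public DbSet<Product> Products { get; set; }
}

public static class CrudSketch
{
    public static void Run()
    {
        using (var db = new ShopContext())
        {
            db.Products.Add(new Product { Name = "Widget" });  // add
            db.SaveChanges();

            var p = db.Products.First(x => x.Name == "Widget");
            p.Name = "Gadget";                                 // modify (change tracking)
            db.SaveChanges();

            db.Products.Remove(p);                             // delete
            db.SaveChanges();
        }
    }
}
```

Each `SaveChanges()` call translates the tracked changes into the corresponding INSERT/UPDATE/DELETE statements.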
Juan Romero •
@Manikandan: Well, I don't mean this in a bad way but you have to understand what it is and what it does.
Of course the schema needs to be updated. It's an ORM mapping tool. It is directly dependent on the backend. This is no disadvantage. It's like saying that tires are a disadvantage on a car because when the road has nails and a tire pops, you have to change it.
Juan Romero •
As I mentioned in my last post, I think it's probably a better idea to discuss what it does, so you can understand its value and whether it works for you.
The EF is essentially an O/R mapping tool. It takes care of object persistence for you. In other words, it acts as your data layer.
In the past, it has traditionally been considered "good practice" to use stored procedures, create your own DAL and mapping classes. That is no longer the case given the improvements in database technology and the existence of tools such as... you guessed it, the EF. Dynamic SQL code is no longer an issue in terms of performance and it's hard to beat LINQ.
Why reinvent the wheel when you can just have it all done with a tool?
In my mind, the answer is somewhat what Marco Aurelio mentioned in a prior post, except that I think you CAN build mission critical applications with it. In terms of performance, contrary to popular belief, if you design your application properly (in other words, if you know what you are doing) performance does not have to be an issue. On the other hand, if you are talking about millions of transactions per day, then I will concede that you do need a higher level of granularity in order to squeeze every drop of performance you can.
So the obvious advantage is that you no longer have to write persistence code. In fact, with the introduction of Code First, you don't even have to create the database structure!... you can keep your POCOs and simply create a context that derives from a particular class and contains collections of your POCOs and boom, everything is created for you. Imagine all the code you would have had to write to persist every single class and its relationships (aka object graph)...
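A minimal sketch of that Code First idea (the `Blog`/`Post` POCOs are hypothetical; `DbContext`/`DbSet` are the EF 4.1 Code First API):

```csharp
using System.Collections.Generic;
using System.Data.Entity; // EF 4.1 Code First (EntityFramework package)

// Plain POCOs -- no base class and no attributes required.
public class Blog
{
    public int BlogId { get; set; }
    public string Name { get; set; }
    public virtual ICollection<Post> Posts { get; set; } // relationship = object graph
}

public class Post
{
    public int PostId { get; set; }
    public string Title { get; set; }
    public virtual Blog Blog { get; set; }
}

// Derive a context and expose your POCOs as sets; on first use,
// EF creates the database and the tables for you.
public class BloggingContext : DbContext
{
    public DbSet<Blog> Blogs { get; set; }
    public DbSet<Post> Posts { get; set; }
}
```

With this in place, adding a `Blog` with its `Posts` and calling `SaveChanges()` persists the whole graph with no hand-written mapping code.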
For the other side of the coin, I somewhat agree with what Mustafa mentioned in a prior post, in the sense that you do get tied to the technology. I mean, once you go down this path, be ready to support it for the lifetime of the application. Trust me, replacing your DAL is NOT something you want to do halfway through (unless you abstract it properly, which doesn't really go well with mapping tools in general)... however, that doesn't mean you cannot do a 10-year project with it. Why not? If you are a Microsoft shop and you know you will be for the next 10 years, and EF seems to only become more popular every day, then by all means, knock yourself out.
The reality of it is that on average we work on small to medium size projects most of the time, and in my personal experience, EF suffices and surpasses expectations in that area. If you really work on Galactic Navigation Systems for the next generation of space shuttles, then you will probably not have to think about this anyway, as a group of highly paid geeks will decide for you, lol...
By the way, EF is not the only one out there. There are other products such as NHibernate that do a pretty decent job as well.
I hope that helps!
Steve G •
@Juan
I disagree with the dismissal of stored procedures. I recently converted an EF 4.1 LINQ database search to a stored procedure and got about a 20 times increase in performance of that search. Yes, there still are good reasons for using stored procedures.
Also, Code First is a nifty technology, but it's no replacement for a well-designed database.
Shawn Deggans •
Don't get caught in the trap that you have to use
one or the other. Most ORMs, including EF4, support stored procedures.
You can have the best of both worlds.
Juan Romero •
@Steve: That simply means the LINQ code was
poorly written. For instance, if you cast your query to
IEnumerable<T> instead of IQueryable<T> and your query
contains an ORDER BY clause, it will cause LINQ to produce SQL that pulls all the records from the table and then conducts the ordering in memory. Casting it as IQueryable<T> will result in the correct SQL being issued (this of course happens because of deferred execution). I would
not be surprised if this was the case with the application you
converted.
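The trap described above can be sketched like this (hypothetical `Order` entity; only the `IQueryable<T>` vs. `IEnumerable<T>` distinction matters):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// IQueryable<T> keeps building an expression tree, so OrderBy/Take are
// translated into ONE SQL statement when the query finally executes.
//
// Casting to IEnumerable<T> switches to LINQ-to-Objects: everything after
// the cast runs in memory, so every row is pulled from the database first.

public class Order
{
    public int OrderId { get; set; }
    public DateTime Placed { get; set; }
}

public static class QueryExamples
{
    // Good: stays IQueryable -- the generated SQL contains TOP and ORDER BY.
    public static List<Order> Recent(IQueryable<Order> orders)
    {
        return orders.OrderByDescending(o => o.Placed).Take(10).ToList();
    }

    // Bad: the cast forces client-side ordering over ALL rows.
    public static List<Order> RecentSlow(IQueryable<Order> orders)
    {
        IEnumerable<Order> inMemory = orders; // enumerating pulls every record
        return inMemory.OrderByDescending(o => o.Placed).Take(10).ToList();
    }
}
```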
It is a fact that whether you use stored procedures or EF-generated SQL, the query optimizer will treat the queries equally. An execution plan will be created, cached and reused. Dynamic SQL is cached just as stored procedures are. There are tons of performance test results and articles on the Internet that support my claims. I found one quickly with Google for you, but feel free to look for more:
http://lennilobel.wordpress.com/2009/08/01/rethinking-the-dynamic-sql-vs-stored-procedure-debate-with-linq/
I am sure there are situations where you could squeeze a millisecond here and there by using SPs (e.g. limiting fields, etc.), but the gain is really negligible unless you are working on a serious application such as a 40 million-hit-a-month website (which I have worked on, by the way). You should be focusing your efforts on your indexes and maintenance plans instead.
As far as Code First goes, you have to look at it from a business perspective. Rapid prototyping and meeting deadlines is what business owners care about. They don't care if the database is well designed. Besides, at the pace technology moves nowadays, your applications become obsolete before you even find out whether your database is well designed. Furthermore, more often than not you simply don't have enough time to sit down and properly design a database, especially since you can't foresee where the product will go down the road. You do your best with what you have, and somewhere down the line things change and you realize your database really isn't well designed because of the above, LOL...
That's just how it is.
Juan Romero •
@Shawn: You are absolutely correct. You can still use your SPs as function imports.
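That mix might be sketched as follows (the `dbo.SearchCustomers` procedure and entity names are hypothetical; `Database.SqlQuery<T>` is the EF 4.1 `DbContext`-style alternative to a designer function import):

```csharp
using System.Collections.Generic;
using System.Data.Entity;
using System.Data.SqlClient;
using System.Linq;

public class Customer
{
    public int CustomerId { get; set; }
    public string Name { get; set; }
}

public class StoreContext : DbContext
{
    public DbSet<Customer> Customers { get; set; }

    // LINQ for the ordinary cases...
    public IQueryable<Customer> ByName(string name)
    {
        return Customers.Where(c => c.Name == name);
    }

    // ...and a stored procedure where it pays off. SqlQuery maps the
    // result set columns back onto the entity's properties by name.
    public List<Customer> Search(string term)
    {
        return Database.SqlQuery<Customer>(
            "EXEC dbo.SearchCustomers @term",        // hypothetical proc
            new SqlParameter("@term", term)).ToList();
    }
}
```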
Steve G •
@Juan -Or not. In my case I grabbed the generated
SQL from LINQ using SQL Profiler; the SQL generated was good.
Furthermore, the SQL in the stored proc was essentially the same as the
SQL generated by LINQ - selecting the same data from the same table
using the same search criteria. And yes, I still saw a huge increase in
speed using stored procedures. IMHO, SQL Server is going to take a
serious reputation hit as "slow" when the real problem is auto-generated
tables that are not properly indexed for their actual use. "Code First"
and "Database First" both ignore the overall system and the natural
impedance mismatch between code and database. A holistic view of data
and database management works far better and results in fewer bugs and
better performance. Just slapping something together because "it'll be
replaced before its underperformance shows" is not professional. Speed
is great when you're doing a prototype; getting it right means you'll
get the call back for the next project.
Srikanth Gupta •
Writing applications using Entity Framework saves a lot of development time, and that's good. If we are developing enterprise applications, we need to think about performance. All in all, it's good for small applications.
Juan Romero •
@Steve: How many times did you run the queries to
compare the performance? My guess is you did it once only. You have to
give the query optimizer a chance to create and cache the execution
plan. No disrespect but just listen to what you are saying. You are
telling me you are executing the same SQL and getting two significantly
different measurements because the source is different? Sorry but that
just doesn't make sense. I would think SQL Server knows better than
that. I simply can't imagine it works the way you describe. I wouldn't pay for such a product.
Code First doesn't mean you are not going to index your tables. It's only the starting point.
Ask any serious project manager and you will see what I mean. It is NEVER about making the perfect product. It's about doing it on time, on budget and within scope. The business doesn't care what is under the hood. All they care about is getting something that works and unfortunately most of the time that is just what they get because of tight deadlines. Where do you think bugs come from? Otherwise we wouldn't need so many patches from Microsoft now would we? :-)
I am going to give you an analogy that was presented to me by a manager I am very grateful to for sharing it with me a long time ago. It went something along the lines of:
"If I ask you to build a car by Friday, have it ready by Friday. I don't want a Mercedes or BMW if you will deliver it next week and it will cost more to make, even if it has the latest technology. It's OK if you give me a Toyota. Just give me a car that runs and does what I asked for. It doesn't have to be PERFECT. We will fix what is under the hood later if need be."
(I drive a Toyota by the way :-)
This my friend is the reality of business. I don't know about everyone else here but I for one am yet to see a perfectly designed database or application. In 15 years doing this, EVERY application I have ever inherited had issues. Whether it was database or code. Even expensive commercial products I have worked on ($40K+ basic package) had issues in one way or another.
Juan Romero •
@Srikanth: Agreed. Realistically, what's the ratio of work on small vs. large, enterprise-level applications for the average developer?
I personally think most of the time it's the first one.
Steve G •
@Juan
Yes, I'm well aware of SQL Caches, query plans and cache poisoning and all the other fun things to do to the SQL optimizer. I specifically cleared the cache and then ran the sample tests several times to cover both query plan generation and cached plan runs. In my case, I did see a large change in performance both in testing and in the final implementation. The before and after was really really noticeable.
As for creating a Mercedes instead of a Toyota, you're preaching to the choir. Obviously, you don't know me and my background, or that I've spent most of my career preaching 'Toyotas' over 'Mercedes and BMWs' (to use your analogy). The problem I have with limiting yourself to 'Code First' is that the indexing and optimization (i.e. taking that necessary second look at the database) simply isn't likely to happen. Consider the situation where Code First would be used: developers who need a database right away and don't have a DBA available - which is *why* they would go with Code First in the first place. Not having a DBA look over the final database for indexing and optimizations is simply a given in this situation.
Coders and DBAs already speak a different language and I've met many many coders who think that DBAs are fundamentally unnecessary. Code First is just going to be fodder to their arguments that DBAs are just expensive paperweights. We both know coders - it works, therefore it's finished. It passes all the tests, things are great and nobody has any idea why it's slow in production.
Don't get me wrong - I like Code First and the simple database interactions in EF 4.1. I've been wanting something like this for a long time. The problem I have with it is that the reality of its use is going to be coders trying to be ersatz DBAs and failing miserably.
Juan Romero •
@Steve: I still don't see how you can get significantly different readings with the same SQL statement.
Your assumptions about Code First are somewhat incorrect IMHO. Like I said before, Code First is only the beginning. Eventually the database can be analyzed and improved by a DBA. Indexes can be specified during database creation through initializers. I do concede, though, that the technology as it exists now is not feasible for enterprise-level applications.
I think DBAs are VERY necessary, but then again it depends on a few things. Some companies don't have DBAs but function just fine with developers that have some DB knowledge. This however is mainly due to the size and scope of the projects (e.g. simple websites). Larger companies with complex software that is part of their main revenue stream HAVE to have DBAs. The reality of it is that I may be an excellent developer with great DB skills, but I simply cannot beat a DBA. He/She breeds and eats SQL every day... :-)
We as coders are not trying to be DBAs. Object persistence is an aspect that we have to take care of and from our perspective, Code First is a great tool since it takes care of all the heavy lifting for us while allowing us to keep our POCOs.
Again, I think Code First is fine for your [average] small to mid-size project, which is what we work on most of the time. In such a scenario, performance is not really paramount and we can usually spare a second or two when loading something.
If the volume of information you expect is significant on the other hand, performance obviously becomes one of the key aspects so you will design your application accordingly.
Going back to the original conversation, EF supports POCOs without Code First too so that's a nice middle ground.
I think the takeaway is that EF is great for small to mid-size applications. High-Volume applications where performance is critical require more granular control over the persistence layer.
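The "indexes through initializers" idea from a few paragraphs up could be sketched like this (hypothetical entity, context, and index names; `CreateDatabaseIfNotExists` and `ExecuteSqlCommand` are EF 4.1 APIs):

```csharp
using System.Data.Entity;

public class Post
{
    public int PostId { get; set; }
    public string Title { get; set; }
}

public class BlogContext : DbContext
{
    public DbSet<Post> Posts { get; set; }
}

// A custom initializer: Seed() runs once after Code First creates the
// schema, which makes it a convenient place to add the indexes the
// model itself cannot express.
public class IndexingInitializer : CreateDatabaseIfNotExists<BlogContext>
{
    protected override void Seed(BlogContext context)
    {
        context.Database.ExecuteSqlCommand(
            "CREATE INDEX IX_Posts_Title ON Posts (Title)"); // hypothetical index
        base.Seed(context);
    }
}

// Registered once at application startup:
// Database.SetInitializer(new IndexingInitializer());
```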
Steve G •
I don't think the problem with Code First will be coders like you. Unfortunately, I see all too often coders who fail to grasp that what the DBA does is both important and necessary. Kind of a "If I don't understand it, it must be easy" sort of mentality. (I run into this in managers all the time.) If more coders were like you and understood and valued the role of the DBA - and vice versa - I wouldn't expect there to be any problems with either a 'Code First' or 'Database First' approach.
Why do we need MVC in ASP.NET?
Michel Posseth •
To write well-structured, automatically testable code, and to write code following a true SoC pattern.
shadab khan •
MVC is a secure and neater architectural pattern.
Woon Cherk Lam •
To better support styling and client-side scripting... although the newer versions of WebForms have improved on this aspect... :P
Gediminas Bukauskas •
I switched to ASP.NET MVC 3 two years ago and found it very attractive because:
1. Excellent integration with Visual Studio test projects – extending and supporting large ASP.NET Web Forms projects is very complicated. You must verify every page, every button and every link before launching a modified version. There are some testing-automation tools, but they are expensive and I didn't have them. Launching automated tests in MVC catches everything wrong in your corrections in one pass. Of course, this is true when you are writing following the "test first" approach.
2. Easy integration with JavaScript tools. I am using MS AJAX + jQuery, but Dojo, Ext JS 4 and Kendo are good too.
3. Extensibility – you have a lot of extension points when working with MVC.
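Point 1 works because an MVC action is just a method that a test can call directly, with no browser and no page life cycle. A sketch (hypothetical controller; plain `Debug.Assert` stands in for whichever test framework you use):

```csharp
using System.Diagnostics;
using System.Web.Mvc;

public class HomeController : Controller
{
    public ViewResult Index()
    {
        ViewBag.Message = "Welcome";
        return View();
    }
}

// The "test first" loop is an ordinary method call plus assertions.
public static class HomeControllerTests
{
    public static void Index_SetsWelcomeMessage()
    {
        var controller = new HomeController();
        ViewResult result = controller.Index();
        Debug.Assert((string)result.ViewBag.Message == "Welcome");
    }
}
```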
Matthew Egan •
I don't use it because it's cumbersome. I happen to agree with the question.
Honestly, I think it's useless. You can do all the same structuring without using MVC and have better control over the product.
Here's my point...
http://www.asp.net/mvc/videos/mvc-2/how-do-i/5-minute-introduction-to-aspnet-mvc
I am assuming this was done with MVC and it crashes on me. So, nope. Garbage.
Matthew Egan •
AJAX is also a horrible solution; I know cross-domain accessors are required, but really? That was a hack project to get around an issue that originated around 2000. Somehow people took it as the solution to a problem.
http://en.wikipedia.org/wiki/Ajax_%28programming%29
I would suggest looking up cross-domain accessing in .NET rather than using AJAX or ActiveX.
Gediminas Bukauskas •
@Matthew
Don't be confused – the MS AJAX library ports .NET structures into the JavaScript world. You can use namespaces, interfaces, inheritance, custom events and much more. My favorite toy in this library is the Client Component, which allows me to manage the life cycle of the page on the client side. The MS Ajax service call is one of the functions provided in this library.
The MS AJAX library was developed for Silverlight 1.x and is a bit old. I am going to switch to Ext JS 4.1, but need some practice before switching.
Robin Stemp •
Here are a few reasons out of many:
- To improve unit testing (though again debatable)
- More control of the client output (though .NET 4 WebForms improves on previous versions)
- No ViewState to begin with; leaner HTML
- No extra HTML emitted by default; closer to how web pages actually work
- MVC – structure and separation of code parts in a fairly standardized design pattern
- SEO-friendly URLs out of the box
- More pluggable
ASP.NET WebForms' design is similar to WinForms apps. With WinForms, all the state related to control values and fired events is handled within the application. Web pages are stateless, meaning nothing is remembered between calls, so WebForms stores control values in the ViewState variable within the HTML; this gets passed back to the server on every postback (say, an onclick event)... While this has worked for many years, it has its issues. Also, up to .NET 4.0 it was a real nuisance when using things like "id" within attributes, as .NET would rewrite the id client-side according to the control's hierarchy. This meant having to use messy code such as ClientID within the view pages.
With MVC, you have more control over the HTML, CSS and JavaScript that gets output, you work more with the design of the web and its stateless nature, and you are also more in line with how other platforms work, such as PHP, Python, Ruby, etc.
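A minimal sketch of the MVC side of that comparison (hypothetical controller and view model; MVC 3 conventions):

```csharp
using System.Web.Mvc;

// One route, one action, one view -- the framework emits only the HTML
// the view writes: no ViewState blob, no rewritten client-side IDs.
public class ProductsController : Controller
{
    // Mapped by default routing to GET /Products/Details/5
    // (an SEO-friendly URL out of the box).
    public ActionResult Details(int id)
    {
        var model = new ProductViewModel { Id = id, Name = "Sample" }; // hypothetical data
        return View(model); // renders Views/Products/Details with this model
    }
}

public class ProductViewModel
{
    public int Id { get; set; }
    public string Name { get; set; }
}
```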