How on earth does dependency injection make sense?
-
So, in another episode of "I want my architecture to make sense and I have no idea what I'm doing" - I want to get with the times and have a proper way of handling dependencies between layers. I did some research, and the general consensus is "use dependency injection, duh, that's what all the cool kids do!"
But the more I look at it, the less sense it makes to me. I mean, let's say I have a class like:
```
public class CarMechanic
{
    private Wrench _wrench = new CombinationWrench("5/8");
    private Screwdriver _screwdriver = new PhilipsScrewdriver();

    public void FixCar(Car c)
    {
        _wrench.ApplyTo(c.Engine);
        _screwdriver.Screw(c.Engine.LeftScrew);
    }
}
```
That's the most basic and seemingly most sensible design. A car mechanic has his personal tools, and all he exposes to the public is that he fixes cars. Now the DI crowd says "wait, but what if you want to check if the mechanic is doing the job right? Let's say you have a:"
```
public class SpyingPhilipsScrewdriver : PhilipsScrewdriver
{
    public override void Screw(Screw s)
    {
        base.Screw(s);
        Logger.Log("A screw " + s.ToString() + " was being screwed");
    }
}
```
"Now you'd need to break into the mechanic's and replace the screwdriver!" And that's bad. Okay.
But the question is: isn't the alternative even worse? Let's say I use the `CarMechanic` class as follows:

```
public class Traveler
{
    //...
    protected void Drive(string destination)
    {
        try
        {
            this.Car.Drive(destination);
        }
        catch (CarException)
        {
            var mechanic = new CarMechanic();
            mechanic.FixCar(this.Car);
        }
    }
}
```
And it makes sense. But with what I know of DI, and if we try the standard constructor injection, now we need to do:
```
catch (CarException)
{
    var mechanic = new CarMechanic(new CombinationWrench("5/8"), new PhilipsScrewdriver());
    mechanic.FixCar(this.Car);
}
```
In other words, you call the mechanic, and he tells you "well, if you want me to do my job, you need to give me a screwdriver and a wrench, and it's up to you to research what kind of wrenches and screwdrivers are around." To which the right answer is "why should I give a shit whether you need a wrench, a screwdriver, or a hydraulic jack? All I know about you is that you fix cars. Fix my fucking car. Now."
In short, it makes no sense from the modeling-reality standpoint, and it doesn't seem to make sense from the programming standpoint, since the middle layer depends on the top layer to know about the bottom layer, which seems to be a total clusterfuck. For a non-car analogy: if I write my UI code, I want to know nothing about the SQL provider my business layer uses - that's the point of the business layer, to abstract this shit out! But instead, the business layer can't work if the UI layer doesn't provide it with the right components from the data layer.
How does that even make sense? Did they solve that in any way? Or am I totally wrong about how DI works?
-
I am no expert at DI and I hate design patterns in general, but what I'd do is a builder class with bunch of static methods, which you would use in the way you would use constructors if you weren't trying to achieve DI.
-
What about having multiple constructors?
-
Traveller should be injected as well.
```
public class World
{
    public static void main()
    {
        Screwdriver screw = new PhilipsScrewdriver();
        Wrench wren = new CombinationWrench("5/8");
        Car car = new Beetle(1952, "blue");
        Mechanic mech = new Mechanic(wren, screw);
        Traveler tourist = new Traveler(car, mech);
    }
}
```
This is so that you can do the same thing in a unit test but with mock objects:
```
public void testCarCrash()
{
    AutoCrashingCarMock car = new AutoCrashingCarMock();
    FakeMechanic mech = new FakeMechanic(null, null);
    Traveler touristUnderTest = new Traveler(car, mech);
    touristUnderTest.Drive("hell");
    Assert.True(car.DriveWasCalled());
    Assert.True(mech.FixCarWasCalled());
}
```
Otherwise, if the Mechanic does a database call, in order to test the Traveller's drive method, you have to have a full stack including valid data in the database, and to test the error condition, you have to actually crash the Car object.
if I write my UI code, I want to know nothing about an SQL provider my business layer uses - that's the point of the business layer, to abstract this shit out!
Exactly right. And to test that UI code, you don't want to have to have a real database handy, either.
-
In short, it makes no sense from the modelling-reality standpoint
Actually yes, it does.
I'll try to run with your car mechanic example. Mechanics in a repair shop don't each own their wrenches; they get them from the shop. A mechanic may work in any workshop, but he may get different (brand, material, etc.) tools at each shop. So the `CarMechanic` class should indeed get the tools injected.

However, you correctly realize it has to stop somewhere. The customer does not care what tools the mechanic needs. That's why the workshop is there. It provides all the tools that the mechanic needs to just fix the damn car, now.
In code form, you'll have a factory:
```
public class RepairShop
{
    private Whatever wrenchSize;
    private Screw screwType;
    // ...

    public CarMechanic GetMechanic()
    {
        return new CarMechanic(new CombinationWrench(wrenchSize), new PhilipsScrewdriver(screwType));
    }
}
```
That way you can test `CarMechanic` with test tools, you can reuse `CarMechanic` for different workshops that may use different tools, and `Traveller` still doesn't need to know anything about wrenches.

In the programming example it is similar. The business logic will come with a factory function/class that reads configuration to get the database connection string (which may include the database type) and constructs the logic object with the correct database provider injected.
So the overall approach is: use DI for individual classes to make them easier to reuse and possible to test in isolation, and for each larger component create a function (or class) that ties that layer together so the dependencies do not propagate to the next layer.
... and since that factory function should itself be simple, it is still easy to create a different factory that ties mock components together for the purpose of testing.
@Yami has a good point that `Traveller` also needs to get the mechanic injected. In what I suggest, that means `Traveller` should get the `RepairShop` injected. It should, definitely. You tie each layer together in a factory and inject that factory into the higher layer, so you can inject a mock factory in tests instead.
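To make that concrete, here's a minimal Java sketch of the idea (class and method names are mine, not from the thread): each layer's factory is what gets injected into the layer above, so `Traveler` never hears about wrenches.

```java
interface Wrench { void applyTo(String part); }

class CombinationWrench implements Wrench {
    public void applyTo(String part) { System.out.println("wrenching " + part); }
}

class CarMechanic {
    private final Wrench wrench;
    CarMechanic(Wrench wrench) { this.wrench = wrench; } // tools are injected
    void fixCar() { wrench.applyTo("engine"); }
}

// the factory ties the tool layer together; Traveler never sees a wrench
class RepairShop {
    CarMechanic getMechanic() { return new CarMechanic(new CombinationWrench()); }
}

class Traveler {
    private final RepairShop shop; // only the factory is injected into the higher layer
    Traveler(RepairShop shop) { this.shop = shop; }
    void onBreakdown() { shop.getMechanic().fixCar(); }
}

public class Main {
    public static void main(String[] args) {
        new Traveler(new RepairShop()).onBreakdown();
    }
}
```

In a test you would hand `Traveler` a different `RepairShop` that returns a mechanic built from mock tools.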
-
Or maybe better summed up like this:

```
catch (CarException)
{
    var mechanic = new CarMechanic();
    mechanic.FixCar(this.Car);
}
```

would mean that the `Traveller` _trains his own_ mechanic. Well, to do that he needs to know about wrenches! However, that's not how it works. `Traveller` does _not train_ a mechanic, he _finds_ one. In programming terms, he either:

- gets him injected as a dependency (“the boss told him: ‘If your car breaks, call that guy’” and that guy has a workshop with all the wrenches already), or
- gets `YellowPages` injected as a dependency (a factory class) and calls `YellowPages.FindMechanic`, which again returns a fully constructed mechanic with all the wrenches he needs.
-
Traveller should be injected as well.
With a mechanic? Kinky stuff thread, etc...
(jokes aside, yeah, I wanted to omit it for brevity - you have to stop somewhere with this). Might have muddled the picture.
```
public static void main()
{
    Screwdriver screw = new PhilipsScrewdriver();
    Wrench wren = new CombinationWrench("5/8");
    Car car = new Beetle(1952, "blue");
    Mechanic mech = new Mechanic(wren, screw);
    Traveler tourist = new Traveler(car, mech);
}
```
That's nasty, though. Your `Main()` is now concerned with finding the right tools for everyone, and there's zero separation of concerns. I get that it makes mocking stupid easy, but now you need to handle all the bits and pieces at the top layer.

use DI for individual classes to make them easier to reuse and possible to test in isolation and for each larger component create a function (class) that ties that layer together so the dependencies do not propagate to the next layer.
So no silver bullet, then.
gets him injected as a dependency (“the boss told him: ‘If your car breaks, call that guy’” and that guy has a workshop with all the wrenches already).
Well that makes things worse, since now the boss is concerned with finding mechanics for his people, and since a mechanic needs wrenches to work, the $10,000-suited guy is going to have to run around hardware shops. (or, in other words, you have your topmost, say UI layer managing deep innards of the data layer).
gets YellowPages injected as a dependency (a factory class) and calls YellowPages.FindMechanic, which again returns fully constructed mechanic with all wrenches he needs.
That's better, though. I don't think it solves all the problems (you still need a boss who hands out Yellow Pages to his drivers), but that's something.
AIUI, to test it, you'd just change the boss to hand out fake phonebooks (mock factory providing CarMechanics with plastic tools)?
-
Your Main() is now concerned with finding the right tools for everyone, and there's zero separation of concerns
Yeah, if it's much more complicated than that, I'd use the factory pattern like @Bulb was saying. Lately I work a lot in Javascript, where you'd have the page code be responsible for finding the models and views that are used on this page and linking them together, but the models and views don't know or care where to find each other. Typically the specific model and view for a page are hardcoded like my example because in front-end javascript there's only so much complexity.
-
So no silver bullet, then.
No, there are no silver bullets. And KISS trumps all design patterns.
Well that makes things worse, since now the boss is concerned with finding mechanics for his people, and since a mechanic needs wrenches to work, the $10,000-suited guy is going to have to run around hardware shops. (or, in other words, you have your topmost, say UI layer managing deep innards of the data layer).
In the real world the rabbit hole goes deep (the boss has a clerk who put up a bid, selected some mechanics (that still does not count as construction, only getting them from somewhere else), and signed a contract with them, etc.). In a programming project it has to stop somewhere.
That's better, though. I don't think it solves all the problems (you still need a boss who hands out Yellow Pages to his drivers), but that's something.
From the programming standpoint it isn't that different. Just different degree to which you split the factories.
In each project you have to find a balance between having the configuration (= taking correct database connection string, connecting to a database and injecting the connection to the logic class) structured and having too many classes.
In a smaller project a `main` that constructs a bunch of classes across all layers is often fine. You can always split it later as the project grows.
-
That's nasty, though. Your Main() is now concerned with finding the right tools for everyone, and there's zero separation of concerns. I get that it makes mocking stupid easy, but now you need to handle all bits and pieces at the top layer.
According to one Uncle Bob, that's exactly what `Main()` is for. The application should be split into two parts: one part consists of all the building blocks for your business logic (all those classes you mentioned), and the other part connects those blocks together by injecting the right dependencies when the application launches - which happens in `Main()`.
-
The third alternative that nobody talks about is using a DI Framework like Spring to handle all this for you, moving all the complexity to the magic black box so you don't have to ever encode the logic around which dependencies are injected where.
I'm not a big fan of that either.
-
using a DI Framework like Spring
Basically a DI framework is just a bloated configuration file parser with a bit of scripting that allows it to construct the classes specified in the configuration, with the parameters also specified there.

There is not that much difference between writing the constructor calls in the principal language and putting the values in an `.ini` or `.properties` file, and writing the constructor calls and values both in some fancy XML. It does not seem to me that the latter should be so much less typing, and it does not save any thinking anyway. And don't expect the administrators to be able to switch the classes, so such configuration is part of the program itself anyway.

DI frameworks appear to be popular in Java, perhaps because Java is bureaucratic and has a buzzword-compliant community. Not so much elsewhere.
-
I'm not sure if you're looking for help or opinions, but I don't believe it does make sense. Or, rather: it makes sense where it makes sense, and doesn't make sense where it doesn't, and militantly using it everywhere would be a terrible idea. (For example, maybe you have `Mechanic(JapaneseCars, KoreanCars)` and `Mechanic(AmericanCars, EuropeanCars)`. That makes more sense to me than specifying which wrench they need specifically, and is much closer to how things work in real life.)
Bulb has the right idea: KISS is the one and only "design pattern" you really need. If you don't need it yet, don't do it yet. Keep the code straightforward, make it obvious what it does and why, reduce spooky action at a distance, and keep on keepin' on.
-
Mechanics in a repair shop don't each own their wrenches, they get them from the shop.
You'd think so, but:
That's really more an issue with the analogy than the concepts, of course.
-
<ObSmugLispWeenie>
Yes, but in a Real Language™, you could wrap it all up in macros that hide that from you! Just how you would do that is Not My Problem, of course.
</ObSmugLispWeenie>

Who says I can't laugh at myself?
-
Yes, but in a Real Language™, you could wrap it all up in macros that hide that from you!
In Java you can wrap it all up in XML that hides that from you!
As I already said, in the end it does not save any thinking.
-
in short, it makes no sense from the modeling-reality standpoint, and it doesn't seem to make sense from the programming standpoint, since the middle layer depends on the top layer to know about the bottom layer, which seems to be a total clusterfuck
I've only seen dependency injection used as a means to allow you to unit test your code. The advantage is that it allows you to easily pass dummy data into your code, but it does have drawbacks. It adds noise to your code and makes it more complex (especially when you use multiple data sources).
I think one of the problems you're having is the car/mechanic example. Let's look at it from a data access point of view instead.
Here's a very simple DAL class that gets information about website members.
Disclaimer: I've not compiled any of this, I've just pulled it out my ass right now, so there may be typos and mistakes
```
public class MemberService
{
    public IEnumerable<Member> GetMaleMembers()
    {
        using (var context = new MemberDataContext())
        {
            var query = from m in context.Members
                        where !m.IsFemale
                        select m;
            return query.ToList();
        }
    }
}
```
Now, let's consider how to unit test this class. We'd ideally like to do something like this
```
//set up our test data
var members = new []
{
    new Member { IsFemale = true },
    new Member { IsFemale = false },
};

var service = new MemberService();
var result = service.GetMaleMembers();

//look for the expected result
Assert.IsTrue(result.Count() == 1, "Expected 1 male member");

members = new []
{
    new Member { IsFemale = true },
    new Member { IsFemale = true },
};

result = service.GetMaleMembers();

//look for the expected result
Assert.IsFalse(result.Any(), "Expected no male members");
```
The problem is, how do we get the test data into the service?
With a couple of minor changes to the class definition (and some new interfaces to help us out) we can do this:
```
public class MemberService
{
    public MemberService() {}

    //new constructor that accepts a dependency injectable data context,
    //NOTE: this accepts an IMemberDataContext, that means it doesn't need to be a real MemberDataContext
    public MemberService(IMemberDataContext context)
    {
        _context = context;
    }

    private IMemberDataContext _context = null;

    public IEnumerable<Member> GetMaleMembers()
    {
        //use the DI context if there is one, otherwise use a new real context
        using (var context = _context ?? new MemberDataContext())
        {
            var query = from m in context.Members
                        where !m.IsFemale
                        select m;
            return query.ToList();
        }
    }
}
```
which then allows us to pass our test data into the service for our unit test
```
//set up our test data
var context = new FakeMemberDataContext();
context.Members = new []
{
    new Member { IsFemale = true },
    new Member { IsFemale = false },
};

//pass the test data into the service
var service = new MemberService(context);
var result = service.GetMaleMembers();

//check for the expected results
Assert.IsTrue(result.Count() == 1, "Expected 1 male member");

//set up the data for the second part of the test
context.Members = new []
{
    new Member { IsFemale = true },
    new Member { IsFemale = true },
};

result = service.GetMaleMembers();

//check for the expected results
Assert.IsFalse(result.Any(), "Expected no male members");
```
in other words, you have your topmost, say UI layer managing deep innards of the data layer
You're correct, you do need implementational knowledge of the thing that you're injecting a dependency into. This isn't a problem when you're unit testing, because you're expected to have knowledge of what you're testing. After all, you have to know what business rules you're testing for.
I hope this has helped clarify it a little.
-
Speaking of the magic black box that @Yamikuronue doesn't like, here's what that looks like in Java:
```
public abstract class GenericDao<T,H>
{
    @PersistenceContext
    protected EntityManager em;

    // other code
}
```
That would tell the JavaEE server to inject an `EntityManager` into the class¹. `@PersistenceContext` is part of the JPA standard and marks that the variable it annotates should be injected by the server... and that it should specifically be an `EntityManager`. It could be an EntityManager from Hibernate, TopLink/EclipseLink, OpenJPA, or even the one IBM uses internally in WebSphere. The point is that you don't care in the application code which of the implementations it actually is, only that it uses the defined interface.

Edit: For reference, there is some stuff going on behind the scenes to make this work.

¹ Yes, it's technically not this class, but a child of this class, because it's `abstract`. I was too lazy to write an example and copied this from existing code.
-
That's the most basic and seemingly most sensible design.
Except that now your `CarMechanic` only knows how to use a very specific `Wrench` and `Screwdriver`. Want to use a `7/8` wrench instead? Well, screw you!

DI says “get these set up from outside the class, by configuration”. And that's the real core of it. Everything else follows from that. If you want to have lots of different mechanics using slightly different tools but otherwise working the same, it's trivial. Yes, it does mean that you're pushing the configuration problem up a level (or maybe several levels), but it eventually means you can end up separating the “business logic” (how things work) from the configuration (what bits work with what). It turns out that's actually pretty helpful.

If you're DI-enabling your code, the things to look out for are anywhere you use `new` (especially in a constructor or outside a line-of-business method) or where you call a singleton. Either of those might be a spot where you should inject the dependency instead. Not always, though; use sense and good taste.
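For instance, a bare-bones sketch (hypothetical names, Java for illustration): once the `new` moves out of `CarMechanic`, swapping in a 7/8 wrench is just a different constructor argument.

```java
interface Wrench { String size(); }

class CombinationWrench implements Wrench {
    private final String size;
    CombinationWrench(String size) { this.size = size; }
    public String size() { return size; }
}

class CarMechanic {
    private final Wrench wrench; // no "new" inside the class any more
    CarMechanic(Wrench wrench) { this.wrench = wrench; }
    String describe() { return "fixing with a " + wrench.size() + " wrench"; }
}

public class Main {
    public static void main(String[] args) {
        // which wrench the mechanic uses is now pure configuration
        System.out.println(new CarMechanic(new CombinationWrench("5/8")).describe());
        System.out.println(new CarMechanic(new CombinationWrench("7/8")).describe());
    }
}
```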
-
I've only seen dependency injection used as a means to allow you to unit test your code.
Unit tests are overrated. Basically you should always have an automated full-stack test. Possibly with a lighter-weight database backend than in production, but that's about it; the rest should run as in the application. You need that kind of test because bugs may occur due to interactions between components that are difficult to find with mocks, and since those tests find the bugs in the components too, you don't really need the finer-grained tests. They can still be useful for debugging, but they are also more code that you need to modify when refactoring (where the larger-block tests often don't care), so they give diminishing returns.
Instead where you realize the true value of dependency injection is when you need to reuse or refactor the code. If the components can reach each other directly (via singletons, well known instances, well known factories etc.) you often find yourself on a wild goose chase looking for where else something is being used and what else you need to initialize for this or that to work. Dependency injection makes that obvious¹.
-
you should always have an automated full-stack test
Which will fall down and break for no reason because today, Firefox didn't feel like clicking a button when it was asked to.
Anything that can be tested at the unit level should, and anything that can be tested at the integration level should.
-
Basically a DI framework is just a bloated configuration file parser with a bit of scripting that allows it to construct classes specified in the configuration with parameters specified also there.
Ehm... nope... you can write DI in Java using annotations.
all the complexity to the magic black box
You don't seem to understand Spring (or Guice for that matter) because you can define all your DI in a single .java or .xml file.
Also, using Spring as an example of DI is overkill. I mean, Spring has DI, but there's so much bloat that using it for DI only is overkill. Better to use something lighter like Guice or one of the many DI implementations out there.
DI frameworks appear to be popular in Java, perhaps because Java is bureaucratic and has buzzword-compliant community . Not so much elsewhere.
DI libraries are available in lots of OOPL™
Unit tests are overrated. Basically you should always have an automated full-stack test.
Those are called integration tests, and unit tests should be atomic in their scope. A unit test which depends on some database operation is Doing It Wrong™
Anyway, you probably need DI when you have something like this:
```
class A
{
    B b;
    C c;
    D d;

    A()
    {
        this.b = new B();
        this.c = new C(b.getSomething());
        this.d = new D(c.getSomething(), b.doSomethingElse());
    }
}
```
In the future, if you want to change `B` to `B1` or `B()` to `B(x)`, you would have to search all the files containing `new B()` and change them. With DI it's all in the same place.
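A rough sketch of what that looks like (stub classes, purely illustrative): the `new` calls collect in one composition root, so changing `new B()` to `new B1()` or `new B(x)` touches a single file.

```java
class B {
    int getSomething() { return 1; }
    int doSomethingElse() { return 2; }
}
class C {
    C(int x) {}
    int getSomething() { return 3; }
}
class D {
    D(int x, int y) {}
}

class A {
    final B b; final C c; final D d;
    // A receives its dependencies instead of constructing them
    A(B b, C c, D d) { this.b = b; this.c = c; this.d = d; }
}

public class Main {
    public static void main(String[] args) {
        // the one composition root: "new B()" appears here and nowhere else
        B b = new B();
        C c = new C(b.getSomething());
        D d = new D(c.getSomething(), b.doSomethingElse());
        A a = new A(b, c, d);
        System.out.println("wired");
    }
}
```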
-
Those are called integration tests
Yes, I know.
and unit tests should be atomic in their scope
No. A test that is atomic in its scope is called a unit test. I know. When I say unit test, I mean that. Not all automated tests are unit tests. I said automated, not unit.
A unit test which depends on some database operation is Doing It Wrong™
Such test is not called a unit test. I am not calling it so.
Still, it is a useful test. And more important than all the unit tests in the world, because you may have perfect unit test coverage and still not find that your code falls to pieces the second it meets an actual database.
-
I'm not sure if Eldelshell meant 'atomic', or 'orthogonal' here; in any case, the unit test suite should be atomic in the sense that integration tests should not proceed until all unit tests have been passed. I don't think anyone is arguing against integration testing, just that those are two separate steps in the full test stack, just as are functionality tests and acceptance tests.
-
A good solution I've seen to the "which wrench" problem is MEF's export metadata. You could have a single IWrench interface, and then multiple implementations tagged with their compatible socket type. You can then either import the implementation with that specific metadata, or import every IWrench and look through them yourself. MEF is a really good underrated framework IMO.
-
DI libraries are available in lots of OOPL™
Besides, there's the DIY approach to DI. I've used it before where I needed to ad-hoc inject a dependency in order to streamline how some piece of code worked; it's no big deal.
The key concept is the idea of a class (or even a function; DI's just a fancy name for a pattern that's older than `qsort()`) getting its dependencies from some entity outside the class' control, not that those dependencies are specified in a configuration file.
-
IOW, it's just higher-order function parameters (where 'object' == 'closure')? That seems a bit oversimplified. If you're right, then it's a great example of what Paul Graham meant when he said that design patterns were just a new name for what in Lisp programming are just idioms - very basic idioms at that; in this case, I mean absolutely rock-bottom basic stuff any Lisp (or Haskell, or OCaml, or Python, or Javascript) programmer does as easy as breathing. Wow.
Seriously, even in C, the only thing that makes this a little tricky is the declaration syntax for function pointers. I can see why formalizing it would make sense, but this seems to be excessively over-defined.
As for the initial issue... surely there is a way to define a Toolbox class that can be used to package the tools? That would at least get away from cluttering the method signature too badly, especially if a default Toolbox can be passed to the class (and/or the individual objects) at definition/instantiation. All you would need is a way to query the Toolbox for its contents.
-
IOW, it's just higher-order function parameters (where 'object' == 'closure')? That seems a bit oversimplified. If you're right, then it's a great example of what Paul Graham meant when he said that design patterns were just a new name for what in Lisp programming are just idioms - very basic idioms at that; in this case, I mean absolutely rock-bottom basic stuff any Lisp (or Haskell, or OCaml, or Python, or Javascript) programmer does as easy as breathing. Wow.
Yep -- DI in the OO context is simply another name for higher-order object parameters, if you will ;) (i.e. this object accepts other objects it needs to do its job, instead of going out and grabbing instances itself -- just like a function with a higher-order parameter accepts another function it needs to do its job instead of trying to go out and find that function itself)
As for the initial issue... surely there is a way to define a Toolbox class that can be used to package the tools? That would at least get away from cluttering the method signature too badly, especially if a default Toolbox can be passed to the class (and/or the individual objects) at definition/instantiation. All you would need is a way to query the Toolbox for its contents.
Yeah, a Toolbox would make sense in the OP's analogy -- it'd be the place to enforce any business rules on tool control, for instance, albeit with a wee bit of help from language mechanisms (RAII, `using` blocks, and friends).
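The parallel is easy to show in a few lines (hypothetical names, a sketch only): passing a function into a function and injecting a collaborator into an object are the same move.

```java
import java.util.function.IntUnaryOperator;

class Pricer {
    private final IntUnaryOperator tax; // the "dependency" is handed in, not constructed here
    Pricer(IntUnaryOperator tax) { this.tax = tax; }
    int price(int base) { return tax.applyAsInt(base); }
}

public class Main {
    // higher-order function: the tax rule arrives as a parameter
    static int withTax(int base, IntUnaryOperator tax) {
        return tax.applyAsInt(base);
    }

    public static void main(String[] args) {
        IntUnaryOperator flatTax = x -> x + 10;
        // same dependency, once as a function argument, once injected into an object
        System.out.println(withTax(100, flatTax));
        System.out.println(new Pricer(flatTax).price(100));
    }
}
```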
-
i.e. this object accepts other objects it needs to do its job, instead of going out and grabbing instances itself
I thought that was Inversion of Control.
-
I thought that was Inversion of Control.
No, that's where the ~~library~~ framework calls your code. (According to Wikipedia, anyway.)

IoC and DI are often used together.
-
Dependency Injection is a type of Inversion of Control.
-
Dependency Injection is a type of Inversion of Control.
No, they're simply often seen together -- doing some DIY ad-hoc DI should tell you that you don't need an IoC framework one whit just to inject a dependency somewhere.
-
Dependency Injection is a type of Inversion of Control.
The other way around: Inversion of Control is a type of (or rather: can be used as a means to provide) Dependency Injection.
-
Also, using Spring as an example of DI is overkill. I mean, Spring has DI, but there's so much bloat that using it for DI only is overkill. Better to use something lighter like Guice or one of the many DI implementations out there.
I like Spring. It does a lot, but I know what it does and I know how to use it to my advantage. It doesn't just do DI; it also does object manufacturing and lifecycle management, controlling when objects are created and destroyed and linking everything together correctly. Getting that stuff right is a major PITA in a complex application, so it's great that I can offload that whole problem to someone else's code. (Spring is laughably complicated internally, but it works well so I'm happy to not spend my time gazing at the pattern of sprocket oscillations in the engine bay.)
But not everyone has apps that are that complicated. It's easily overkill for simpler systems. Just don't assume that everyone's operating at that level.
-
@Yamikuronue said:
Traveller should be injected as well.
With a mechanic? Kinky stuff thread, etc...
Depends on whether or not `Traveller` implements `HeroinAddict`. That's some dependency injection!
-
@Maciejasjmj said:
@Yamikuronue said:
Traveller should be injected as well.
With a mechanic? Kinky stuff thread, etc...
Depends on whether or not `Traveller` implements `HeroinAddict`. That's some dependency injection!

Don't you mean implements `ISucks`? In DI, `Traveler` would just accept `Heroin` - though `Traveler` would probably need to implement `IAddict<Heroin>` in order for that to work.

Edit: now that I think about it, `Traveler` should accept `IEnumerable<ITolerance>` - you'd never do dependency injection for `Heroin` since the drug itself is just a visitor.
-
DI and IoC are two completely separate concepts that just happen to be implemented in the exact same way.
-
DI and IoC are two completely separate concepts that just happen to be implemented in the exact same way.
They're not "implemented in the exact same way" at all.
Inversion of Control means you hand away control over the program flow to an external code-unit and trust it to call into your code. Anything based on callback functions (e.g. event handlers, iteration/filtering/etc. on lists or sets based on lambdas, asynchronous control flow with promises/futures) is IoC.
While IoC is about handing away control over the program flow, DI is about taking (or being given) control over an application's components or units of work, instead of relying on a code-unit to autonomously manage its own.
You can have IoC without DI at all, simply by using evented programming, reactive programming, etc.
You can have DI without IoC by employing a service locator pattern or by injecting and constructing by hand.

Or you can leverage IoC to automate and abstract away the DI aspects by handing responsibility for the complete object graph construction over to a third-party framework. That's how the big frameworks like Spring, Unity, StructureMap, etc. do their thing.
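Hand-rolled DI with no IoC anywhere might look like this sketch (names are made up): plain code keeps control of the flow and simply passes dependencies in.

```java
interface Store { String load(); }

class FileStore implements Store {
    public String load() { return "data"; }
}

class Report {
    private final Store store; // dependency injected by hand
    Report(Store store) { this.store = store; }
    String render() { return "report: " + store.load(); }
}

public class Main {
    public static void main(String[] args) {
        // no framework and no inversion of control:
        // ordinary code builds the graph and drives the flow itself
        Report report = new Report(new FileStore());
        System.out.println(report.render());
    }
}
```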
-
You can have IoC without DI at all
This is the typical way Node works. Code is called from the event loop, but Require directly constructs dependencies.
-
This is the typical way Node works. Code is called from the event loop, but Require directly constructs dependencies.
Yes. CommonJS Require operates as a service locator pattern.
-
Inversion of Control means you hand away control over the program flow to an external code-unit and trust it to call into your code. Anything based on callback functions, e.g. , event handlers, teration/filtering/etc. on lists or sets based on lambdas, asynchronous control flow with promises/futures; all of that is IoC.
While IoC is about handing away control over the program flow, DI is about taking (or being given) control over an application's components or units of work, instead of relying on a code-unit to autonomously regulate its own.
And both are implemented by either having a method (usually constructor) accept some abstract object which is provided from the outside, or relying on some global object to provide them. It's just a matter of the purpose whether it's IoC or DI.
-
Whatever, it's all just wankery until you have a working program.
Wankery!
-
TIL @blakeyrat has a design pattern fetish.
-
You can have DI without IoC by employing a service locator pattern or by injecting and constructing by hand.
The first of these isn't even DI. It's just the same as not using it at all, except with the exceptions moved to run-time.
-
I'd use the factory pattern
Dependency injection requires factories.
Because something has to gather resources, and it makes no sense for an object to gather its own resources, because then the method it uses becomes a dependency.
But then you need a factory factory.
Because you need a factory to provide the various factories.
This factory-factory could work off of a config, or a database, or MEF.
It simply takes dependency injection up through dll injection.
It could just simply be a static class with hard-coded resources...
-
A service locator works like this:
- You have a singleton dependency in every class.
- You have to get it somehow, either by actual dependency injection (And if you do that, what's the point?) or statically.
- You ask that singleton for something.
- If that singleton has one, your program doesn't crash.
- The calling code has no idea what types you need to have in the service locator for the call to not crash.
- If you did #2 statically, you don't even know that you need a service locator.
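A minimal sketch of that failure mode (hypothetical names): nothing in `Report`'s constructor tells the caller a `Clock` must be registered, and a missing registration only blows up at run time.

```java
import java.util.HashMap;
import java.util.Map;

// the static singleton that every class reaches for
class ServiceLocator {
    private static final Map<Class<?>, Object> services = new HashMap<>();
    static <T> void register(Class<T> type, T impl) { services.put(type, impl); }
    static <T> T get(Class<T> type) {
        Object s = services.get(type);
        if (s == null) throw new IllegalStateException("not registered: " + type);
        return type.cast(s);
    }
}

interface Clock { long now(); }

class Report {
    // the Clock dependency is invisible in the constructor;
    // callers have no way to know it must be registered first
    String stamp() { return "at " + ServiceLocator.get(Clock.class).now(); }
}

public class Main {
    public static void main(String[] args) {
        ServiceLocator.register(Clock.class, () -> 42L);
        System.out.println(new Report().stamp()); // works only because of the line above
    }
}
```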
-
You can't inject the service locator, but you can inject the individual services - and that's what you care about.
INB4 - I think it is horrible design too. Any global state is a horrible design (if you don't go low-level, that is, because in low-level world, there's often no other way to do things).
-
Any global state is a horrible design
Singleton-by-configuration is OK; there's one thing because that's what the application needs, and the majority of code doesn't care. Singleton-by-hard-coded isn't OK; you've reinvented the global variable except in a way that is even harder to fix.
-
You would think that global state is the sort of thing everyone would agree on, but you'd be amazed at how often it shows up. Or maybe not, in light of the things we've seen before in TDWTF. There are very, very few places where global state is acceptable, and even in those cases it usually can be avoided.
OTOH, a lot of global state is in forms most people don't usually think of as program state, such as file systems or external network resources. Visibility of these is usually addressed as a security issue, but hardly anyone considers the problems that come from them simply being accessible at points where they shouldn't be. It is rarely a serious issue, so going to great lengths to address it is usually beyond the point of diminishing returns, but... well, it's still global state, even if I'm making too big a deal of it. IJBM.