How to return a list and a flag in Java
-
But you don't have to ever type delete yourself to do it. You can instead use a smart pointer that will manage the memory for you, and automatically call delete for you when it goes out of scope (or lower the reference count in shared pointers). It has the exact same performance as manually managing the memory, but it's not error prone.
Yes, and how do you think smart pointers are implemented? And about performance, they are never better than the alternative solutions, but they surely can be worse - and often are. But usually you don't care, because 99% of the time the difference is negligible - however, at this point you might as well write in Java.

Don't use an ancient potato of a computer that can't run a single game and Skype at the same time.
It was an example. It was valid in 2006 - decades after people invented variable timesteps. If you want a 2015 example, hm... How about playing a graphics-intensive game while your housemate watches a movie on a 4K TV connected to the HDMI port of your PC. Or something. The point is, random performance spikes happen, and a game should handle them well.
-
Yes, and how do you think smart pointers are implemented?
They are templates that call delete in the destructor, can't be copied, but can be moved to pass ownership. Which are the exact same operations you would have to do manually on a raw pointer. Only instead of typing it yourself and risking a memory leak or a double delete, it is all handled automatically.
And about performance, they are never better than alternative solutions,
Which is what "exact same performance as manually managing the memory" means, yes.

but they surely can be worse - and often are.
I'd like you to describe a scenario where that could be so. Between move operations and RVO, I doubt a raw pointer can beat a unique_ptr in terms of performance.

For an example of what this means, here's a toy example comparing creating raw and smart pointers, and returning them from a function to be used elsewhere: http://goo.gl/f6Mb3q
You'll note that the assembly output for both is nearly identical. In fairness, the function that returns a smart pointer is a bit longer when sitting by itself. But when actually called from another function, that goes away.
Feel free to come up with an example that does something useful, isn't obviously wrong, and shows where I'm wrong.
shared_ptr is more expensive to use, of course, but that's because it does more things. In particular, it provides a thread-safe reference count. Of course that's going to be more expensive than a raw pointer. But if you add all the book-keeping code you'd need to know when to finally delete the pointer safely in the same context where a shared_ptr is appropriate, you'd have essentially duplicated by hand a tool the library already provides.
If you use a shared_ptr where a unique_ptr would suffice, you're just wasting performance. The answer to that is, use unique_ptr unless you know you need a shared_ptr.
-
It was an example. It was valid in 2006 - decades after people invented variable timesteps. If you want a 2015 example, hm... How about playing a graphics-intensive game while your housemate watches a movie on a 4K TV connected to the HDMI port of your PC. Or something. The point is, random performance spikes happen, and a game should handle them well.
We used to have Age of Empires LAN parties with severely underpowered machines, and that game engine uses fixed timesteps. 1 hour of in-game time would take 3 hours of real life time. While this was really annoying, the game's code wasn't at fault, we were at fault for being morons who used AMD i486 clones for network gaming in 2002.
-
we were at fault for being morons who used AMD i486 clones for network gaming in 2002.
...and I thought I was bad for trying to run WoW on an AMD K6 233 MHz with 256 MB of RAM in 2004. Which ran OK as long as you weren't in a capital city.
-
Minecraft
Speaking of which, I've been waiting for this old video to somehow be relevant again and now the time has come:
-
These were really strange beasts. I don't remember the moniker, but these were some kind of factory-overclocked (~100 MHz) AMD CPU that was pin-compatible with the 486 and was supposed to upgrade you to Pentium-ish performance without actually getting a new system. Our high school had a whole computer lab of these which were decommissioned around 2001 - 2002, and naturally quite a few of them ended up in my bedroom.
They worked well for Doom, and I was able to teach myself all the basics of Ethernet and TCP/IP.
-
I remember those. I didn't realize it was the same AMD we know and love today -- I thought it was a Tiger Computing Intel knockoff!
-
These were really strange beasts. I don't remember the moniker, but these were some kind of factory-overclocked (~100 MHz) AMD CPU that was pin-compatible with the 486 and was supposed to upgrade you to Pentium-ish performance without actually getting a new system. Our high school had a whole computer lab of these which were decommissioned around 2001 - 2002, and naturally quite a few of them ended up in my bedroom.
They worked well for Doom, and I was able to teach myself all the basics of Ethernet and TCP/IP.
They were probably AMD 5x86 chips. Assuming they were AMD chips and not Cyrix.
-
They were probably AMD 5x86 chips.
That's the one!
Cyrix.
:barf: I had a Cyrix Pentium II clone once. Single worst PC I've ever used in my life. Double-whammy since it was an eMachines with integrated graphics and no AGP slot.
-
I remember those. I didn't realize it was the same AMD we know and love today -- I thought it was a Tiger Computing Intel knockoff!
AMD started out as the second-source supplier of x86 chips to IBM for its PC business. This is because IBM policy required multiple suppliers in case something happened to one of them.
This spawned a bunch of lawsuits later on when Intel decided they didn't want to share. This is also why Intel switched to the Pentium moniker in the mid-90s... because they could claim it was a separate product line and try to avoid having to share the technical details with AMD.
-
They are templates that call delete
My point exactly. If C++ didn't allow manual memory management, smart pointers wouldn't be possible (or they would have to be baked into the language itself instead of libraries).

I'd like you to describe a scenario where that could be so. Between move operations and RVO, I doubt a raw pointer can beat a unique_ptr in terms of performance.
Two words: memory pool. <Yes, I know that by constructing from a raw pointer and using a custom deleter, you can make unique_ptr work with a memory pool, but even then you'll have to call this deleter on every destruction of the unique_ptr. Also, if going full-RAII, the memory pool itself needs to be refcounted, which isn't always necessary from the algorithm's point of view.>

here's a toy example comparing creating raw and smart pointers
Yes, because I totally fucking argue about shit I know damn nothing about. Stop treating me like an infant who doesn't know how make_shared() works.

If you use a shared_ptr where a unique_ptr would suffice, you're just wasting performance.
And if you use unique_ptr where stack allocation would suffice, you're wasting performance just as much. And if you use stack allocation where constexpr would suffice, you're wasting performance even more. Now, when we're done with basic C++11 tutorials, can we continue our discussion?
-
And none of this matters because your program has a terrible UI.
-
We used to have Age of Empires LAN parties with severely underpowered machines, and that game engine uses fixed timesteps. 1 hour of in-game time would take 3 hours of real life time. While this was really annoying, the game's code wasn't at fault, we were at fault for being morons who used AMD i486 clones for network gaming in 2002.
The game's code was certainly at fault here - a pure fixed timestep is still a flawed approach. It assures the simulation will be stable, yes, but the game shouldn't slow down when FPS drops. They should have detected when physics and graphics diverge by more than one timestep, and run physics twice (or thrice, or however many times are needed) to keep them in sync. Unless it wasn't the GPU but the CPU that was the bottleneck - then there's nothing they could do, regardless of how they implemented update timing.
-
And none of this matters because your program has a terrible UI.
Are you criticizing native languages, or C++ specifically? Because I'm unsure if I should throw Windows or MacOS in your face.
-
And none of this matters because your program has a terrible UI.
LOLWUT? The stuff in comctl32.dll doesn't become automagically 100x better or worse just because you changed the language you're calling it from.
-
Also. Because you once again changed your avatar, I almost took you seriously.
-
Just trollin'.
-
I don't see any trolling around here. Just sincere stupidity.
-
If C++ didn't allow manual memory management, smart pointers wouldn't be possible
I didn't say it didn't allow it. I said manual memory management was "Doing It Wrong". It being allowed is a prerequisite for doing it wrong. You said that it often results in worse performance. But for your example, you provide:

Two words: memory pool.
Memory pool what? How does a smart pointer lower performance in this context?

Now, when we're done with basic C++11 tutorials, can we continue our discussion?
Sure, as soon as you present an argument. My position has been that in the general case, you shouldn't use manual memory management. You should delegate the resource management to a smart object, and manage the lifetime of the object instead, trusting the object to clean up resources.

The benefit of that is avoiding entire classes of bugs. Leaks and double deletes become impossible, and the constraints the objects force on your design place lifetime front and center and help produce better structured code. All this, at no performance penalty. To back that claim up, I showed toy examples.
You've simply said I'm wrong and not presented a reasoning to justify it.
-
This is also why Intel switched to the Pentium moniker in the mid-90s...
Also because model numbers weren't considered trademarkable at the time.
-
So, the answer, which has so far been entirely missed, is "don't". At least in this case. The original code returned a list, but it could fail, in what I am guessing were "exceptional" circumstances? Is this the case? I don't see any boolean return in the previous code.
Adding C-style error handling in Java is not really the best way to use the exception system.
-
Adding C-style error handling in Java is not really the best way to use the exception system.
Java's exceptions work. Really. They're there to be used for failures (though as with any error handling, there's overhead with respect to ordinary returns).
-
the best way to use the exception system
```java
public class ListAndFlagException extends Exception {
    private final List<Integer> list;
    private final boolean flag;

    public ListAndFlagException(List<Integer> list, boolean flag) {
        this.list = list;
        this.flag = flag;
    }

    // getters
}
```
then you can just throw that from the method
Filed under: Which way's the great ideas thread?
-
I didn't say it didn't allow it.
Neither did I accuse you of saying it.

I said manual memory management was "Doing It Wrong".
Is shared_ptr's author Doing It Wrong™ too?

You said that it often results in worse performance.
Because many people overuse shared_ptr and then write Java in C++.

Memory pool what? How does a smart pointer lower performance in this context?
I meant that default smart pointers offer worse performance than a memory pool, and you can't implement a memory pool without raw pointers. <See the hidden text of my previous reply for what I have to say about custom deleters.>

My position has been that in the general case
In the general case you couldn't care less about the performance of a pointer, and you'll probably be better off writing in Python or C# rather than C++. We're talking about squeezing every drop of performance, or at least as much as possible before the codebase becomes an unmaintainable mess.

The benefit of that is avoiding entire classes of bugs. Leaks and double deletes become impossible, and the constraints the objects force on your design place lifetime front and center and help produce better structured code.
Check out Rust. It eliminates even more classes of bugs, also with no performance penalty. No, seriously. I actually recommend that language. Been developing in it for a few months now, and I can assure you it's better than C++ in many ways.

All this, at no performance penalty.
Calling a pointer's destructor is a performance penalty. In contexts where you don't have a 1-1 relation between pointers and allocations, it's a non-zero penalty.

You've simply said I'm wrong and not presented a reasoning to justify it.
And you keep thinking I don't use smart pointers. The reality is, I use them all the time, and they're a real blessing in typical code. It's just, they don't solve all problems (cycles ahoy), and they aren't absolutely free (especially in more complicated cases, the simplest of those complicated cases being memory pools).
-
I don't think that any of those ensure that structs are never anywhere but the stack.
Something that can't be anywhere but the stack is totally useless.
But they do guarantee that structs are allocated in their owning context, that is, they don't get a separate allocation unless you explicitly ask for it.
Check out Rust. It eliminates even more classes of bugs, also with no performance penalty. No, seriously. I actually recommend that language. Been developing in it for a few months now, and I can assure you it's better than C++ in many ways.
QFT
I meant that default smart pointers offer worse performance than memory pool, and you can't implement memory pool without raw pointers.
However, smart pointers can be used with a memory pool given a suitable Deleter or overloads of operator new and operator delete.
-
Is shared_ptr's author Doing It Wrong™ too?
Obviously, if something doesn't exist, you can't use it. So using raw pointers to implement something that handles raw pointers is fine. unique_ptr's author couldn't have used unique_ptr to implement unique_ptr. So he gets a pass. Same for shared_ptr.

In the general case, if you need to implement something, it's ok not to use that thing to implement itself unless you have created a closed time loop and given the complete thing to your past self.
Because many people overuse shared_ptr and then write Java in C++.
So people shouldn't use features that other people misuse? Even in the case of people abusing shared_ptr, at least they're not writing memory leaks all over the place. Bad programmers wouldn't have "learned to do it right", or they'd have learned not to abuse shared_ptr too. The choice is between them writing fast, leaky code, or slow, non-leaky code. They're going to write shitty code either way. You might argue that code that runs out of memory is better than code that is stable but uniformly slow, but that's just a subjective valuation.

I meant that default smart pointers offer worse performance than memory pool
Well, yes. Of course general solutions offer worse performance than customized solutions. Allocating and freeing raw pointers with the default new and delete would also be worse than a memory pool that fits your specific usage.

Check out Rust.
I have. I like many of the things it inherits from C++ that apparently no one else wants (destructors, for example), and the things they change (const by default, etc.). Waiting for it to get more mature. I hear they recently released the first stable version? That the syntax and library are set?

It's just, they don't solve all problems (cycles ahoy), and they aren't absolutely free (especially in more complicated cases, the simplest of those complicated cases being memory pools).
Anyone who is writing anything complex enough to run into the limitations of smart pointers probably doesn't need to listen to random people on the internet. Doing It Wrong™ is tongue in cheek. Exceptions do apply. If you know you are working on something that doesn't benefit from them, and you have the data to back it up, you have my permission to not use them.
-
However, smart pointers can be used with memory pool given suitable Deleter or overloads of operator new and operator delete.
Let me quote the hidden text I put in one of the posts above...

Two words: memory pool. Yes, I know that by constructing from a raw pointer and using a custom deleter, you can make unique_ptr work with a memory pool, but even then you'll have to call this deleter on every destruction of the unique_ptr. Also, if going full-RAII, the memory pool itself needs to be refcounted, which isn't always necessary from the algorithm's point of view.
-
Hm, I've read the posts, but somehow overlooked this.
-
Obviously, if something doesn't exist, you can't use it. So using raw pointers to implement something that handles raw pointers is fine. unique_ptr's author couldn't have used unique_ptr to implement unique_ptr. So he gets a pass. Same for shared_ptr.
And where's the line between "exists" and "doesn't exist"? Before C++11, was unique_ptr considered existent due to being present in Boost? What about hashmaps, or GUI libraries? Is making my own GUI Doing It Wrong™ since Qt was first released? Was Microsoft Doing It Wrong™ when they made DirectX™ while OpenGL was commonly available? Hell, why limit it to C++ - I'd argue the Boost authors were Doing It Wrong™ because the Java makers had already created a good garbage collector back in 1995! Where's the limit?
So people shouldn't use features that other people misuse?
They should. Read my post in full, please.

Well, yes. Of course general solutions offer worse performance than customized solutions. Allocating and freeing raw pointers with the default new and delete would also be worse than a memory pool that fits your specific usage.
Yes, but your original argument was that using new and delete at all was Doing It Wrong™.

I have. I like many of the things it inherits from C++, that apparently no one else wants to (destructors, for example), and the things they change (const by default, etc). Waiting for it to get more mature. I hear they recently released the first stable version? That the syntax and library are set?
Well, not really. I'd say the language is 99% complete, and the stdlib is at 90%. But it's feature complete. Currently they're in beta, but in a moon's time there will be the final 1.0 version.
-
-
Exceptions do apply. If you know you are working on something that doesn't benefit from them, and you have the data to back it up, you have my permission to not use them.
One of my favourites for the "exceptions apply" was in some code I wrote a bunch of years ago to take a longish string and split it into a list of immutable strings, each of length 1 character. (There were good reasons for not making them all be a plain char. They don't matter for this story.) The simplistic code of just making a list and then going through the input string one character at a time, adding a new string onto the end for each — the simplest algorithm you could choose — was slow because of all the memory allocation required; for a string of a few megabytes, you end up with rather a lot of memory activity.

So I tried using a hash table (a simple array is less suitable because Unicode) that I'd populate as I went along with raw pointers to the strings I'd already allocated, and only allocate a new string for a character if I'd not seen it before. There was no need for holding a reference in the hash table — the list I was building held everything I needed there just fine — but a pointer was just dandy. When the list construction was done, I could throw away the hash table simply enough, and it wouldn't need to do much work to handle dropping the references it held, as it didn't actually have any.
The updated algorithm was faster. Indeed, it was quite a bit faster on ordinary English text of around 15 characters, and by the time you hit a few megabytes it was orders of magnitude faster. The dropping of memory allocation activity from O(length of string) to O(number of unique characters in string) beat out all the costs of more complex processing. I guess we could have done it without raw pointers but they were the perfect tool.
Clients of the function (which is still deployed) that does this don't see any of the complexity. They just get the speed benefits.
-
but even then you'll have to call this deleter on every destruction of unique_ptr.
While yes, that's true, there should be a pretty close relationship between allocations and destructions. Nearly 1:1, unless you are doing something weird and switching ownership around all the time. And if you are, how likely are your programmers to get it right every time, and not get confused about whether this or that raw pointer is an owning pointer or not?
Before C++11, was unique_ptr considered existent due to being present in Boost?
If your company has mandated that you should use Boost, yes. If you are in a sane environment, no.

Where's the limit?
"Do you have access to it, and does it do the thing you want to do?" If you are going to reimplement something you don't have access to, even though it exists, that's fine. If you are going to reimplement something that exists but doesn't meet your needs, that's fine. If you're going to reimplement something that exists, and does the exact same thing you want to do, you're probably wasting your time.

Yes, but your original argument was that using new and delete at all was Doing It Wrong™.
Ok, I'll amend it to be, "If you don't know for certain why you need them, using them is Doing It Wrong." I did not mean to imply that people developing stuff that handles the memory itself shouldn't be able to manipulate it directly.

but in a moon's time there will be final 1.0 version.
Cool, I'll have to take another look at the project page.
-
While yes, that's true, there should be a pretty close relationship between allocations and destructions. Nearly 1:1, unless you are doing something weird and switching ownership around all the time.
PODs anyone?

Ok, I'll amend it to be, "If you don't know for certain why you need them, using them is Doing It Wrong."
-
a lower framerate could slow down the whole game
The classic example is Space Invaders. The enemies get faster as they're thinned out for precisely this reason.
-
The classic example is Space Invaders. The enemies get faster as they're thinned out for precisely this reason.
Wait, really? I never knew that! That's pretty neat.
http://www.imaginivity.co.uk/wp-content/uploads/2015/01/bob.jpg
-
...and have him talk about how Minecraft went from passing int dim, float x, float y, float z as parameters to passing immutable Location objects with exactly those four members instead, and how its memory churn skyrocketed to nearly a meg a frame because of that?
I wondered how long it would be before someone brought that up.
http://what.thedailywtf.com/t/how-to-return-a-list-and-a-flag-in-java/48115/44?u=riking
-
Y'know, I did catch that reference at the time. But I guess it was too subtle for good recall.
-
```go
package wat

import "testing"

func Args(dim int, x, y, z float64) {
	_, _, _, _ = dim, x, y, z
}

type Location struct {
	dim     int
	x, y, z float64
}

func Struct(l Location) {
	_, _, _, _ = l.dim, l.x, l.y, l.z
}

func BenchmarkCallArgs(b *testing.B) {
	for i := 0; i < b.N; i++ {
		Args(0, 1, 2, 3)
	}
}

func BenchmarkCallStruct(b *testing.B) {
	for i := 0; i < b.N; i++ {
		Struct(Location{0, 1, 2, 3})
	}
}
```

```
ben@australium:~/go/src$ go test -bench . minecraft_test.go
testing: warning: no tests to run
PASS
BenchmarkCallArgs	2000000000	1.37 ns/op
BenchmarkCallStruct	2000000000	1.37 ns/op
ok	command-line-arguments	5.761s
```
If the extra zero nanoseconds is a problem, Go might not be the best language for you. But if doing that causes you to lose more than a few nanoseconds, you might want to reconsider your choice of language.
-
repeat the test with different parameter values? (to test if the compiler is optimizing the allocation to a static location struct.)
-
How about I just disassemble it:
```
0000000000475490 <command-line-arguments.BenchmarkCallArgs>:
  475490:	48 8b 54 24 08       	mov    0x8(%rsp),%rdx
  475495:	31 c0                	xor    %eax,%eax
  475497:	48 8b 5a 70          	mov    0x70(%rdx),%rbx
  47549b:	48 39 c3             	cmp    %rax,%rbx
  47549e:	7e 26                	jle    4754c6 <command-line-arguments.BenchmarkCallArgs+0x36>
  4754a0:	31 c9                	xor    %ecx,%ecx
  4754a2:	f2 0f 10 1d fe e2 15 	movsd  0x15e2fe(%rip),%xmm3        # 5d37a8 <$f64.3ff0000000000000>
  4754a9:	00
  4754aa:	f2 0f 10 15 fe e2 15 	movsd  0x15e2fe(%rip),%xmm2        # 5d37b0 <$f64.4000000000000000>
  4754b1:	00
  4754b2:	f2 0f 10 05 fe e2 15 	movsd  0x15e2fe(%rip),%xmm0        # 5d37b8 <$f64.4008000000000000>
  4754b9:	00
  4754ba:	48 ff c0             	inc    %rax
  4754bd:	48 8b 5a 70          	mov    0x70(%rdx),%rbx
  4754c1:	48 39 c3             	cmp    %rax,%rbx
  4754c4:	7f da                	jg     4754a0 <command-line-arguments.BenchmarkCallArgs+0x10>
  4754c6:	c3                   	retq
  ...
```
```
00000000004754d0 <command-line-arguments.BenchmarkCallStruct>:
  4754d0:	48 8b 54 24 08       	mov    0x8(%rsp),%rdx
  4754d5:	31 c0                	xor    %eax,%eax
  4754d7:	48 8b 5a 70          	mov    0x70(%rdx),%rbx
  4754db:	48 39 c3             	cmp    %rax,%rbx
  4754de:	7e 2b                	jle    47550b <command-line-arguments.BenchmarkCallStruct+0x3b>
  4754e0:	31 db                	xor    %ebx,%ebx
  4754e2:	0f 57 c0             	xorps  %xmm0,%xmm0
  4754e5:	31 c9                	xor    %ecx,%ecx
  4754e7:	f2 0f 10 1d b9 e2 15 	movsd  0x15e2b9(%rip),%xmm3        # 5d37a8 <$f64.3ff0000000000000>
  4754ee:	00
  4754ef:	f2 0f 10 15 b9 e2 15 	movsd  0x15e2b9(%rip),%xmm2        # 5d37b0 <$f64.4000000000000000>
  4754f6:	00
  4754f7:	f2 0f 10 0d b9 e2 15 	movsd  0x15e2b9(%rip),%xmm1        # 5d37b8 <$f64.4008000000000000>
  4754fe:	00
  4754ff:	48 ff c0             	inc    %rax
  475502:	48 8b 5a 70          	mov    0x70(%rdx),%rbx
  475506:	48 39 c3             	cmp    %rax,%rbx
  475509:	7f d5                	jg     4754e0 <command-line-arguments.BenchmarkCallStruct+0x10>
  47550b:	c3                   	retq
  ...
```
(both functions were inlined)
-
ok, but you still have static values for the arguments and structs.
change the values every iteration and see if that affects things?
seriously, we want to force aberrant behavior if possible, so if it still behaves correctly when the values are different every iteration, then you can brag about your language of choice!
(i'd try in my favorite language, but as we all know if you want performance then javascript is not for you)
-
The memory layout for this:
```go
var Foo struct {
	A int
	B float64
	C float64
	D float64
}
```
is identical to the memory layout for this:
```go
var (
	A int
	B float64
	C float64
	D float64
)
```
-
yes, but if the values are const the compiler may omit the memory allocations because the values are immutable
i want to remove the ability of the compiler to perform that optimization!
-
The compiler puts the memory on the stack because it doesn't need to put it on the heap. If it needed to put it on the heap, it could also put the individual ints/floats on the heap.
-
The compiler puts the memory on the stack because it doesn't need to put it on the heap. If it needed to put it on the heap, it could also put the individual ints/floats on the heap.
Not sure if you're dodging for a reason or not.
i"m asking you to run this test:
```go
package wat

import "testing"

func Args(dim int, x, y, z float64) {
	_, _, _, _ = dim, x, y, z
}

type Location struct {
	dim     int
	x, y, z float64
}

func Struct(l Location) {
	_, _, _, _ = l.dim, l.x, l.y, l.z
}

func BenchmarkCallArgs(b *testing.B) {
	for i := 0; i < b.N; i++ {
		Args(i+0, i+1, i+2, i+3)
	}
}

func BenchmarkCallStruct(b *testing.B) {
	for i := 0; i < b.N; i++ {
		Struct(Location{i + 0, i + 1, i + 2, i + 3})
	}
}
```
or failing that, tell me how to run it on play.golang.org. I'm obviously missing something, because when I tried your original code I got:
```
Error while loading "/tmp/sandbox012051497/a.out": Cannot open NaCl module file
Using the wrong type of nexe (nacl-x86-32 on an x86-64 or vice versa) or a corrupt nexe file may be responsible for this error.
```
-
Shocking revelation: Go is not Java. YOU HAVE DEMONSTRATED A VERY IMPORTANT POINT BEN L!
Before reading that post, I was sure that Go was, in fact, Java.
-
ben@australium:~/go/src$ cat minecraft_test.go

```go
package wat

import "testing"

func Args(dim int, x, y, z float64) {
	_, _, _, _ = dim, x, y, z
}

type Location struct {
	dim     int
	x, y, z float64
}

func Struct(l Location) {
	_, _, _, _ = l.dim, l.x, l.y, l.z
}

func BenchmarkCallArgs(b *testing.B) {
	for i := 0; i < b.N; i++ {
		Args(i+0, float64(i+1), float64(i+2), float64(i+3))
	}
}

func BenchmarkCallStruct(b *testing.B) {
	for i := 0; i < b.N; i++ {
		Struct(Location{i + 0, float64(i + 1), float64(i + 2), float64(i + 3)})
	}
}
```

```
ben@australium:~/go/src$ go test -bench . minecraft_test.go
testing: warning: no tests to run
PASS
BenchmarkCallArgs	500000000	3.08 ns/op
BenchmarkCallStruct	500000000	3.08 ns/op
ok	command-line-arguments	3.710s
```
-
-shrug-
fair enough then.
still not enough to get me to touch go. :-P
-
The other option is to just create a quick-and-dirty "FunctionNameReturn" struct that contains the two things you want to return.
Which is, essentially, what the Tuple would be.
-