Game.GetPlayer() or Actor Property PlayerRef Auto?

Post » Tue Nov 20, 2012 12:22 pm

Between those two: Game.GetPlayer(), or declaring an Actor property for the player?

For all my scripts, I'm trying to find the method that would be less taxing, please?

-Mush-


edit: mistake in the topic title, I meant: Actor Property PlayerRef Auto
When I was writing, I was thinking in terms of

Actor player = Game.GetPlayer()

but the question stands either way: doing that, or having the Auto property.
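For concreteness, the two approaches being compared look roughly like this in Papyrus (a minimal sketch; SomeItem is a placeholder property for illustration, not from the thread):

```papyrus
; Option 1: fetch the player with a native call every time.
Form Property SomeItem Auto ; hypothetical item, illustration only

Function GiveViaGetPlayer()
	Actor player = Game.GetPlayer() ; latent native call, waits on the VM
	player.AddItem(SomeItem, 1)
EndFunction

; Option 2: declare an Auto property and fill it with the player's
; ACHR in the Creation Kit; reads are then just a variable lookup.
Actor Property PlayerRef Auto

Function GiveViaProperty()
	PlayerRef.AddItem(SomeItem, 1)
EndFunction
```

Both do the same thing; the question is which costs less.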
Facebook me
 
Posts: 3442
Joined: Wed Nov 08, 2006 8:05 am

Post » Tue Nov 20, 2012 10:53 am

See the earlier thread on this topic: http://www.gamesas.com/topic/1360171-playerref-gamegetplayer-or-do-it-in-properties/ Edit: as others say below, there's no real reason not to use a property; it's what I always do in my own scripts.
Sista Sila
 
Posts: 3381
Joined: Fri Mar 30, 2007 12:25 pm

Post » Tue Nov 20, 2012 7:26 am

Awesome, thanks :)
Vicky Keeler
 
Posts: 3427
Joined: Wed Aug 23, 2006 3:03 am

Post » Tue Nov 20, 2012 2:25 pm

PlayerREF property is literally 1,000 times faster/cheaper, even if there's only a single reference to the Player. I'd go with the property every time or even 'Game.GetForm(0x14) As Actor', anything but GetPlayer as it's, comparatively, slow as cold molasses.

Spoiler
[09/02/2012 - 10:32:55PM] Code Comparer log opened (PC)
[09/02/2012 - 10:32:55PM] Skyrim Version: 1.7.7.0
[09/02/2012 - 10:32:55PM] SKSE Version: 1.051100
[09/02/2012 - 10:33:08PM] Calibration Complete: 0.042110 for 10000 iterations. Time to complete: 5.254999
[09/02/2012 - 10:33:09PM] === 'PlayerRef' ===
[09/02/2012 - 10:33:09PM] Started 'PlayerRef' at: 36.411999 | Iterations to complete: 10000
[09/02/2012 - 10:33:09PM] Finished 'PlayerRef' at: 36.410889 | Iterations completed: 10000
[09/02/2012 - 10:33:09PM] Time elapsed (Raw) for 'PlayerRef': 0.041000
[09/02/2012 - 10:33:09PM] Time elapsed (Calibrated) for 'PlayerRef': 0.000000
[09/02/2012 - 10:33:09PM] Approximate time for each iteration (Raw): 0.000004
[09/02/2012 - 10:33:09PM] Approximate time for each iteration (Calibrated): 0.000000
[09/02/2012 - 10:34:53PM] === 'GetPlayer' ===
[09/02/2012 - 10:34:53PM] Started 'GetPlayer' at: 37.485001 | Iterations to complete: 10000
[09/02/2012 - 10:34:53PM] Finished 'GetPlayer' at: 140.839890 | Iterations completed: 10000
[09/02/2012 - 10:34:53PM] Time elapsed (Raw) for 'GetPlayer': 103.397003
[09/02/2012 - 10:34:53PM] Time elapsed (Calibrated) for 'GetPlayer': 103.354897
[09/02/2012 - 10:34:53PM] Approximate time for each iteration (Raw): 0.010340
[09/02/2012 - 10:34:53PM] Approximate time for each iteration (Calibrated): 0.010335
[09/02/2012 - 10:35:15PM] Log closed

Given GetPlayer will always return the same ACHR, said ACHR is already necessarily persistent, and that resources are saved nigh invariably if using the property, I'd suggest using the property method, always.
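The three ways of reaching the player mentioned above can be sketched side by side (an untested illustration, not code from the thread):

```papyrus
Actor Property PlayerREF Auto ; filled once in the CK, no call at runtime

Function CompareLookups()
	Actor a = PlayerREF                    ; property read: cheapest
	Actor b = Game.GetForm(0x14) As Actor  ; form lookup by FormID: still cheap
	Actor c = Game.GetPlayer()             ; latent native call: by far the slowest
EndFunction
```

All three yield the same persistent ACHR, so there's no functional difference, only cost.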
Taylor Tifany
 
Posts: 3555
Joined: Sun Jun 25, 2006 7:22 am

Post » Tue Nov 20, 2012 3:05 pm

I recommend the property method. Even if it's slightly (and even then, only ever so slightly) more complex to set up, the speed increase is worth it.

I've heard several excuses for not wanting to use this method, and I don't think any of them really hold water. I've heard that it "makes your code more difficult to read", which I don't agree with; maybe if you named your player property "ElephantObject" or something else meaningless, sure, it would be harder to read. But the default auto-resolvable PlayerRef is self-explanatory.

I've also heard it said that it "doesn't matter when you only call it once or seldomly, the increase is negligible." The thing about that is that your script doesn't work in a vacuum; it is sharing CPU with every other script in the game. We all need to strive to be good "papyrus citizens" and reduce CPU overhead whenever we can, even if the gain seems negligible. It all adds up.
Anna Watts
 
Posts: 3476
Joined: Sat Jun 17, 2006 8:31 pm

Post » Tue Nov 20, 2012 3:27 pm

Use a property if you do have a performance problem, because GetPlayer() suspends your script until the next frame.
Otherwise, most of the time, just use whichever is cleaner and easier for you. For me that means GetPlayer().

There is no "right" one :smile: GetPlayer() is slower, but a property takes up some amount of memory (even if empty). Is your script running too slowly, or causing other scripts to run slowly? Then measure it with the profiling tools:
  • http://www.creationkit.com/StartStackProfiling_-_Debug
  • http://www.creationkit.com/StartObjectProfiling_-_Form
  • http://www.creationkit.com/StartScriptProfiling_-_Debug
  • http://www.creationkit.com/DumpPapyrusStacks
In most cases the problem will stem from a bad algorithm (like polling rather than using events), not from GetPlayer vs a property.

Or, to summarize more succinctly, there are the http://blogs.msdn.com/b/audiofool/archive/2007/06/14/the-rules-of-code-optimization.aspx:
  • Don't
  • Don't yet
  • Profile first
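As a rough sketch of how the stock profiling calls are wired up ("MyQuestScript" is a placeholder name; profiling also has to be enabled via bEnableProfiling=1 in the [Papyrus] section of Skyrim.ini):

```papyrus
ScriptName ProfilingDemo extends Quest

; Output lands in the Papyrus logs folder once profiling is enabled.
Function ProfileSuspectCode()
	Debug.StartScriptProfiling("MyQuestScript") ; profile all instances of one script
	; ... run the code you suspect ...
	Debug.StopScriptProfiling("MyQuestScript")

	Debug.StartStackProfiling() ; or profile just the calling stack
	; ...
	Debug.StopStackProfiling()
EndFunction
```

Profile first, then decide whether GetPlayer vs a property even shows up in the numbers.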
Harry Hearing
 
Posts: 3366
Joined: Sun Jul 22, 2007 6:19 am

Post » Tue Nov 20, 2012 11:09 am

I've also heard it said that it "doesn't matter when you only call it once or seldomly, the increase is negligible." The thing about that is that your script doesn't work in a vacuum; it is sharing CPU with every other script in the game. We all need to strive to be good "papyrus citizens" and reduce CPU overhead whenever we can, even if the gain seems negligible. It all adds up.
But GetPlayer() does not really consume more CPU than a property; it's just that your script is paused until the next frame. Besides, what you recommend is premature optimization, and that is "the root of all evil".
The Papyrus VM can process a few thousand calls per frame. So, no, it really does not matter at all if your script adds one call from time to time.
It's like arguing for coding everything in asm: a very bad practice that is not at all guaranteed to make your program efficient and reliable, quite the contrary.

First write clean and elegant code and do not waste time on useless premature optimization. Then look at the result. Are there problems?
* No: congratulations, you have clean code.
* Yes: then measure and profile, and optimize the bottlenecks you identified. You will only have to make 5% of your code dirty.
Petr Jordy Zugar
 
Posts: 3497
Joined: Tue Jul 03, 2007 10:10 pm

Post » Tue Nov 20, 2012 3:32 pm

Since using a PlayerRef property takes up some memory, does using multiple instances of PlayerRef use memory for each instance? Or does it not matter, since the player is always persistent?
Cathrin Hummel
 
Posts: 3399
Joined: Mon Apr 16, 2007 7:16 pm

Post » Tue Nov 20, 2012 2:18 pm

But GetPlayer() does not really consume more CPU than a property, it's just that your script is paused until the next frame.
The Papyrus VM can process a few thousand calls per frame.

Do you have any data to support that?

So, no, it really does not matter at all if your script adds one call from time to time.
It's like arguing for coding everything in asm, a very bad practice that is not guaranteed at all to make your program efficient and reliable.

How is defining an easy-to-use property anywhere remotely like coding an application in straight ASM? Is adding one additional property definition at the top of your script going to completely obfuscate your code?

Again, if you have data to support what you're saying, I'm all ears, but I'm not going to argue about conjecture. The data I have in front of me from actual testing tells me that any Get* function is a lot, lot slower (relatively speaking).
Robert Jr
 
Posts: 3447
Joined: Fri Nov 23, 2007 7:49 pm

Post » Tue Nov 20, 2012 11:44 am

Since using a PlayerRef property takes up some memory, does using multiple instances of PlayerRef use memory for each instance? Or does it not matter, since the player is always persistent?

It depends on what you mean by "instance". Calling / using PlayerRef multiple times in the same script will not take up more memory. Adding it as a property in multiple scripts will consume more memory each time it's defined, but the amount of memory we're talking about here is negligible. I don't know this empirically (since I haven't seen Bethesda's source code), but all signs point to properties being passed by reference rather than by value: they're pointers that all point to the same Object Reference, in this case the Actor PlayerRef.
Justin
 
Posts: 3409
Joined: Sun Sep 23, 2007 12:32 am

Post » Tue Nov 20, 2012 4:29 am

It depends on what you mean by "instance". Calling / using PlayerRef multiple times in the same script will not take up more memory. Adding it as a property in multiple scripts will consume more memory each time it's defined, but the amount of memory we're talking about here is negligible. I don't know this empirically (since I haven't seen Bethesda's source code), but all signs point to properties being passed by reference rather than by value: they're pointers that all point to the same Object Reference, in this case the Actor PlayerRef.

A property is a "container", and the container itself takes up some amount of memory (not much, but some). The contents of an object reference property are shared between properties, so you only pay the cost of the object reference itself once (which is comparatively much more).

GetPlayer takes up some amount of processor time (not much, but some); most of it is simply the caller waiting (consuming no CPU time) until the function can be called. Meanwhile the VM and game can process something else. Calling GetPlayer repeatedly is more likely to slow down your own script than someone else's, since your script will spend most of its time yielding its "slice" to other scripts. On the flip side, a ton of script threads all calling functions at the same time will slow each other down, as the VM can only handle X calls every frame (X is not constant).

I think what is valid advice is not to call a function over and over when the result will always be the same. If you can afford the performance hit of a single function call at the beginning of your process (and most scripts can), storing the result in a function-level variable for reuse (which lives in stack memory and is therefore "free") is probably a good thing to do.
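That caching advice might look something like this (a sketch, not from the thread; CountInventory and akForms are made-up names):

```papyrus
ScriptName CachingDemo extends Quest

Function CountInventory(Form[] akForms)
	; Pay for the latent call once, up front...
	Actor player = Game.GetPlayer()
	Int i = akForms.Length
	While i > 0
		i -= 1
		; ...then reuse the stack-local variable inside the loop,
		; with no further calls to GetPlayer.
		If player.GetItemCount(akForms[i])
			Debug.Trace("Player has item at index " + i)
		EndIf
	EndWhile
EndFunction
```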

It's also valid advice to not add a script to every single object in the game (or a very commonly-used base object) containing a property with identical contents, thus consuming a "lot" of script object (heap) memory.

Optimizing Papyrus is just like optimizing any other programming language. See above link and short list thoughtfully quoted by Perdev.
Juan Cerda
 
Posts: 3426
Joined: Thu Jul 12, 2007 8:49 pm

Post » Tue Nov 20, 2012 2:46 pm

As always, thank you Viper.
Lory Da Costa
 
Posts: 3463
Joined: Fri Dec 15, 2006 12:30 pm

Post » Tue Nov 20, 2012 5:43 am

Thank you again SmkViper. :)

Calling GetPlayer repeatedly is more likely to slow down your own script than someone else's, since your script will spend most of its time yielding its "slice" to other scripts.
By the way, I wanted to test whether Papyrus scripts really have a maximum allocated time slice per frame, and since you used this word in quotes it made me think about it again. I would very much appreciate it if you could shed some light on this point. Is there really a hard time limit for every script (or rather just something that stems from the time spent waiting here and there)? And, provided there is such a hard limit, is it per instance or per "script" (shared by all instances of that script)?
Steph
 
Posts: 3469
Joined: Sun Nov 19, 2006 7:44 am

Post » Tue Nov 20, 2012 12:23 am

The VM behaves like a multi-threaded OS in that each running script thread gets a timeslice. The data the script thread is operating on is irrelevant (aside from the locking mechanism that prevents two threads from operating on the same data at the same time).
lolli
 
Posts: 3485
Joined: Mon Jan 01, 2007 10:42 am

Post » Tue Nov 20, 2012 12:49 am

Ahhh, I see, the OS analogy explains it all. If any script runs for too long and prevents others from running, you put it on hold and let the others get their share. So this "time slice" story wasn't totally an urban legend after all! :smile:
Many thanks again, SmkViper, this information may soon prove very useful.

EDIT: I just realized that it also makes home-brewed benchmarks' results even more unreliable.
Lady Shocka
 
Posts: 3452
Joined: Mon Aug 21, 2006 10:59 pm

Post » Tue Nov 20, 2012 1:23 pm

EDIT: I just realized that it also makes home-brewed benchmarks' results even more unreliable.
My understanding has been that it may not be easy to build a script that can reliably judge the amount of CPU time used by a function that is synched with the frame rate.

But the direct approach of measuring the processing time of a function in a tight loop does work relatively well for a small number of functions, many of them mathematical, that execute without delay. However, because the function that fetches the time value is synched to the frame rate, it introduces an error at the beginning and end of the run, so one has to use a fairly long test run to reduce the impact of that error.
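A tight-loop measurement of that sort might be sketched like this (untested; Utility.GetCurrentRealTime() is the frame-synched timer being discussed, so the endpoints carry the error mentioned above):

```papyrus
ScriptName BenchDemo extends Quest

Function TimeFunction(Int aiIterations = 10000)
	Float start = Utility.GetCurrentRealTime() ; frame-synched: error at the start...
	Int i = aiIterations
	While i > 0
		i -= 1
		Math.Sqrt(2.0) ; non-delayed function under test
	EndWhile
	Float elapsed = Utility.GetCurrentRealTime() - start ; ...and at the end
	Debug.Trace("Approx. seconds per call: " + (elapsed / aiIterations))
EndFunction
```

A large iteration count makes the run much longer than one frame, which amortizes the endpoint error.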
Taylrea Teodor
 
Posts: 3378
Joined: Sat Nov 18, 2006 12:20 am

Post » Tue Nov 20, 2012 8:45 am

Sorry, I believe I understand the rest of the discussion, but what does this mean? I imagine it could mean something like: "Assigned property values for a script attached to a base item are only stored once with the base item, though the script variables where they are used are stored separately for each instance of a placed reference", but it does leave me a bit uncertain.
You've got the gist of it. To phrase it more precisely: object references are just references to the object, something like a pointer/address, while the object and all of its data are stored at a single unique location in memory.

My understanding has been that it may not be easy to build a script that can reliably judge the amount of CPU time used by a function that is synched with the frame rate, but the direct approach of measuring the processing time of a function in a tight loop does work relatively well for a small number of functions, many of them mathematical, that execute without delay.
Well, even for non-delayed and non-native functions you still have to deal with:
* The other functions currently running, which you know nothing about. I guess there are not that many, but that is just a guess and I could be really wrong. You would need a lot of different scripts running concurrently to suppress that noise.
* The time slice allocated to your script when the VM is saturated. Fine if you only want to test the power available to a single script, but not if you want to gauge the whole VM's power, since in real-world scenarios it is interesting to split your code across multiple scripts. See #1.
* The global time limit allocated to the whole Papyrus VM: 1.5ms per frame by default. You need to take it into account and have a constant framerate. Easy enough, it just has to be done. If you would rather measure a number of calls per frame (more interesting imho), then you only need the constant framerate and can ignore the global time limit.
* The timer resolution, which looks like a regular Win32 timer with a 15ms resolution judging by the profiling logs. Easy enough to solve: increase the number of iterations until the duration is far greater than one frame. But that has not always been done.
* And finally you need a properly written test by someone who correctly understands Papyrus, while the information is not so easy to dig up.

Many tests have problems imho, and I do not think any single one tried to test the whole VM's power; rather they focused on a single script that was likely paused at times and could not benefit from multithreading. I am really suspicious about the numbers I saw here and there, and a 1-to-10 ratio between those and a proper parallel benchmark would not surprise me.
CHANONE
 
Posts: 3377
Joined: Fri Mar 30, 2007 10:04 am

Post » Tue Nov 20, 2012 2:00 am

I am really suspicious about the numbers I saw here and there and a 1 to 10 difference ratio between those ones and a proper parallel benchmark would not surprise me.
I agree with your points, and I only trust my own tests, and even then only as relative indicators when compared to each other. Still, it is a pity that one can't really get a good feel for how taxing the synched functions are. Personally, I would be curious about the random number functions in particular, as there might be interesting tradeoffs in, let's say, making three calls to get random x, y, z coordinates vs. making one call and using math to split the result into three numbers.

P.S. And sorry for withdrawing my first question from under your feet. I felt fairly happy with my understanding of what was said, after all, and didn't wish to bother anybody with it.
Megan Stabler
 
Posts: 3420
Joined: Mon Sep 18, 2006 2:03 pm

Post » Tue Nov 20, 2012 12:41 am

Besides, what you recommend is premature optimization, and that is "the root of all evil".

First write clean and elegant code and do not waste time on useless premature optimization.
There's nothing "evil" about preemptive optimization (it becomes second nature), particularly in cases like this where legibility isn't negatively impacted. If you know you'll be referring to the player more than once in your script, why not pull out the stops and declare a property from the outset? Anything less is resigning yourself to spending ten dollars repeatedly for something you can get for 10 cents and then use for free ad infinitum. It's a 'stitch in time saves nine' sort of thing.

What is "clean" or "elegant" is subjective. What's efficient is not. Unnecessarily using a function over and over to obtain an object that never changes seems, to me, entirely inelegant, and it is incidentally drastically more expensive and time-consuming. I'd just as soon save my allotted resources for when they're genuinely needed rather than squander them, so I have been avoiding and will continue to avoid GetPlayer, and in doing so will have fewer complaints from my mods' users about FeatureY taking too long to resolve. It's not as if declaring a PlayerREF property is convoluted or makes a script harder to read; in fact I'd argue PlayerREF is more legible, but that's probably just because it's how I'm used to seeing things.
Brittany Abner
 
Posts: 3401
Joined: Wed Oct 24, 2007 10:48 pm

Post » Tue Nov 20, 2012 2:48 pm

I never said it's bad to use a property. I said:
* That if you do it for optimization reasons when there is no need for optimization, you're doing it wrong and should focus on code quality.
* That if this optimization is driven by the belief that GetPlayer() consumes CPU power like crazy, rather than by the facts (it just delays your script by one frame), you're doing it even more wrong.
* That if you tell others they absolutely need to and always have to use a property, you're even more wrong.
* That if you do it because you think a property is cleaner and more elegant than GetPlayer(), then fine. I do not share this opinion, but it's mostly subjective.


Now, regarding the general topic of premature/preemptive optimization: there are times when preemptive optimization is fine. For example, if you repeatedly do something expensive, it may be a good idea to cache the result right off the bat. I am not an ayatollah of code; quite the contrary, I am very pragmatic, and I have sometimes had to design architectures with performance in mind from the ground up because of very challenging constraints. But most of the time "preemptive optimization" is just straight premature optimization, and in my career I have seen many horrors produced by young developers because of this behavior: horrors in terms of code quality, stability and, ironically, performance. The fact is that the consensus in the software industry is that you should first and foremost write clean code, and the industry is damn right on that point. And the fact that you claim preemptive optimization is now second nature for you makes me think you're wasting your time and that your code is unlikely to be efficient anyway. I understand this opinion can be offensive and I apologize for that; I am just voicing my opinion honestly.


Finally, I think it would be interesting to (re-)read a few things:
* http://en.wikipedia.org/wiki/KISS_principle.
* http://c2.com/xp/YouArentGonnaNeedIt.html.
Even if you're totally, totally, totally sure that you'll need a feature later on, don't implement it now. Usually, it'll turn out either:
* You don't need it after all
* What you actually need is quite different from what you foresaw needing earlier.

This doesn't mean you should avoid building flexibility into your code. It means you shouldn't overengineer something based on what you think you might need later on. You save time, because you avoid writing code that you turn out not to need. Your code is better, because you avoid polluting it with 'guesses' that turn out to be more or less wrong but stick around anyway.
(replace "feature" with "optimization").

* http://c2.com/cgi/wiki?StructuredProgrammingWithGoToStatements.
This study focuses largely on two issues:
* Improved syntax for iterations and error exits, making it possible to write a larger class of programs clearly and efficiently without goto statements;
* A methodology of program design, beginning with readable and correct, but possibly inefficient programs that are systematically transformed if necessary into efficient and correct, but possibly less readable code.

Programmers waste enormous amounts of time thinking about, or worrying about, the speed of noncritical parts of their programs, and these attempts at efficiency actually have a strong negative impact when debugging and maintenance are considered. We should forget about small efficiencies, say about 97% of the time premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%.

A good programmer will not be lulled into complacency by such reasoning, he will be wise to look carefully at the critical code; but only after that code has been identified. It is often a mistake to make a priori judgments about what parts of a program are really critical, since the universal experience of programmers who have been using measurement tools has been that their intuitive guesses fail.
Yeah, goto statements. The article is old: "for" and "while" loops were still a new thing, CPU power was very limited, and goto statements were sometimes a way to improve speed and sometimes just a bad habit. Knuth advocated abolishing goto statements as often as possible, arguing that code quality should come first and that optimization should almost always be reserved for the later stages.
c.o.s.m.o
 
Posts: 3419
Joined: Sat Aug 12, 2006 9:21 am

Post » Tue Nov 20, 2012 10:25 am

And the fact that you claim preemptive optimization is now second nature for you makes me think you're wasting your time and that your code is unlikely to be efficient anyway. I understand this opinion can be offensive and I apologize for that; I am just voicing my opinion honestly.
Writing scripts with efficiency in mind, as a general habit, is much like putting the ox in front of the cart... :shrug: It takes less time to do it right/fast the first time (assuming timing matters at all) if you keep optimization in mind, provided it's not at the expense of clarity. No time is wasted, nor are the results inefficient. If anything, optimized code, whether optimized preemptively or after the fact, is easier to read, as there's generally less of it.

As an example, when making a loop to do something for every index, one can go about it multiple ways. If timing is of the essence, it matters which route one takes.
  • Cart in front of the ox, so to speak:
    FormList Property kSomeFLST Auto

    Function SomeFunction()
    	Int iIndex = 0
    	While iIndex < kSomeFLST.GetSize()
    		If Game.GetPlayer().GetItemCount(kSomeFLST.GetAt(iIndex))
    			Debug.Trace("Player has member: " + iIndex)
    		EndIf
    		iIndex += 1
    	EndWhile
    EndFunction
  • Ox, aptly, in front of the cart:
    Actor Property PlayerREF Auto
    Form[] Property kFormArray Auto

    Function SomeFunction()
    	Int iIndex = kFormArray.Length
    	While iIndex > 0
    		iIndex -= 1
    		If PlayerREF.GetItemCount(kFormArray[iIndex])
    			Debug.Trace("Player has member: " + iIndex)
    		EndIf
    	EndWhile
    EndFunction
One segment is no more difficult to read than the other, nor does it necessarily take any more time to pump out, yet one loop will resolve substantially faster than the other for several reasons (most measurably, how the Player is referred to), letting the script get on to whatever's next sooner and giving the end user a better, snappier experience.

If there are multiple ways to skin a cat and one is faster while rendering the same or better results, it becomes second nature to start skinning any given cat the efficient way from the outset. If remodeling, you start from the top and work your way down, so you'll not have to worry about spilling paint on the brand-new carpet or spraying ceiling paint on the new wallpaper, right? Starting off on the right foot can save a lot of time and simultaneously provide better results when solving any problem. Snakes preferentially take their meals head first. Having extrapolated from other experiences with Papyrus, I know the second version will be faster, so I'd be inclined to set up such a loop the second way right off the bat. That's what I mean by preemptively optimizing.
Del Arte
 
Posts: 3543
Joined: Tue Aug 01, 2006 8:40 pm

Post » Tue Nov 20, 2012 12:24 am

One segment is no more difficult to read than the other, nor does it necessarily take any more time to pump out
No, neither segment is more difficult to read, but you use an array instead of a FormList, which may be more tiresome to update in the CK, and which you may repeatedly have to redo if, after a recompilation, the CK is unable to read the script's container. So, yes, it is likely to be more time-consuming in the long run.

yet one loop will resolve substantially faster than the other for several reasons
Yes, it will complete faster, but this does not mean it eats less CPU. At best it does by an infinitesimal amount, while the fact that you now store one array per attached object may hurt the CPU to a greater extent because of http://igoro.com/archive/gallery-of-processor-cache-effects/

So, sure, your script terminates earlier. But does it even matter? Does your script need to complete in one frame rather than twenty frames? You seem never to have considered this point, while it is the only relevant one.
Because when it comes to CPU, the potential benefit is ridiculously small and the potential loss is very likely to be greater (while probably also totally negligible, unless it is run very often).

And finally, while readability is not impacted in your example (actually it is, for people not used to this less common form, since they need to think about it), you said you systematically "optimize", and I bet the issue is rarely as neutral. How often every hour do you choose a less readable form because of "speed"?

So, yes, you did waste time, your code is less readable for people not used to this form, and the benefit is likely to be non-existent and maybe negative.
Darian Ennels
 
Posts: 3406
Joined: Mon Aug 20, 2007 2:00 pm

Post » Tue Nov 20, 2012 6:18 am

I'm not a programmer so feel free to ignore me, but I do use scripts in my mods.

Maybe what you're saying would be true for the business world, where time is money and one has to sacrifice "quality" for the sake of time.

But at home, making my mods, I take the "better to do it right than quick" approach. So I don't care if I spend 30 minutes making a small function snappier and cleaner.

Maybe that's one way to look at it :shrug:
OJY
 
Posts: 3462
Joined: Wed May 30, 2007 3:11 pm

Post » Tue Nov 20, 2012 1:25 pm

Maybe what you're saying would be true for the business world, where time is money and one has to sacrifice "quality" for the sake of time.
No, it's the opposite. Yes, it saves time, but the end quality and performance are also higher.

By doing premature optimization:
* You give yourself a comforting feeling of speed, while at best you gain only marginal improvements that will never be noticeable.
* You harm your code: you make it less maintainable, more polluted, and harder to read, modify and optimize.
* You do not actually optimize. Optimization can only be done once you know where the bottlenecks are, and you need to measure whether you really improved things. That is truer than ever on an exotic and poorly documented platform such as Papyrus, where people chain one wrong guess after another. Also, optimization relies on a good choice of algorithms and architecture, and on a robust understanding of the platform, far more than on little writing tricks.
* You waste time.

And, yes, time matters even more than on professional projects. A developer works 50 hours a week on average and can achieve sizeable things in that time, but can you afford that much time for your personal projects? You need to be fast if you want to achieve something.
Chris Guerin
 
Posts: 3395
Joined: Thu May 10, 2007 2:44 pm

Post » Tue Nov 20, 2012 12:07 am

  • No, neither segment is more difficult to read, but you use an array instead of a FormList, which may be more tiresome to update in the CK, and which you may repeatedly have to redo if, after a recompilation, the CK is unable to read the script's container. So, yes, it is likely to be more time-consuming in the long run.
    It's just as easy to update an array as it is a FormList. No time lost there...
  • Yes, it will complete faster, but this does not mean it eats less CPU. At best it does by an infinitesimal amount, while the fact that you now store one array per attached object may hurt the CPU to a greater extent because of http://igoro.com/archive/gallery-of-processor-cache-effects/

    So, sure, your script terminates earlier. But does it even matter? Does your script need to complete in one frame rather than twenty frames? You seem never to have considered this point, while it is the only relevant one.
    That's the point: it will execute faster, and of course it matters, in almost every context I can think of, how snappily a script or function does its thing. With an allotted amount of time per round, I want my script to get in there, take care of business in round 1 (KA-POW), and make room for whatever's next, without things happening partially, waiting, continuing, waiting again, visual anomaly, etc. Any author of a combat mod, I'm sure, would agree that responsive code is important. For all the times people have complained about Papyrus being slow, I can't see how the relevance of optimization/expedition isn't more apparent, given that Papyrus can be very fast if the stops are pulled out.
  • Because when it comes to CPU, the potential benefit is ridiculously small and the potential loss is very likely to be greater (while probably also totally negligible, unless it is run very often).
    Perhaps there's a tradeoff, sure, but it seems cost-effective given that scripts' responsiveness often matters a great deal.
  • And finally, while readability is not impacted in your example (actually it is, for people not used to this less common form, since they need to think about it)
    Others' ability to read it shouldn't be a primary concern. Either method is self-explanatory anyhow to anyone familiar with Papyrus. I don't see having to think about it as a bad thing. It ...tickles when the aha moment arrives and a concept or strategy sinks in while studying another's code that initially baffled me yet runs beautifully. I'm glad others' scripts have stretched my comprehension, personally.
  • you said you systematically "optimize", and I bet the issue is rarely as neutral.
    I never said I systematically optimize. It's more like strategizing: lining things up mentally before putting them to paper, as it were.
  • How often every hour do you choose a less readable form because of "speed"?
    Never. Optimized code, to me, tends to be more legible as, again, there's generally less of it.
  • So, yes, you did waste time, your code is less readable for people not used to this form, and the benefit is likely to be non-existent and maybe negative.
    Where, pray tell, has any time been wasted? None of that held water. All of the strategies implemented in the second example for optimization/expedition would, if building a function from the ground up, be implemented from the outset; I'd not have written the code one way and then the other, placing stops in my way only to pull them out later.
  • You need to be fast if you want to achieve something.
    Writing efficient/fast code does not necessarily take more time.
Mandy Muir
 
Posts: 3307
Joined: Wed Jan 24, 2007 4:38 pm


Return to V - Skyrim