Suggestions needed for purging data in SDR

Post » Wed Sep 14, 2016 3:57 pm

Hello all. This is primarily directed at mod authors who have complex mods that need to regularly purge data in order to reduce savegame bloat, but anyone is free to comment.

My situation:
SDR uses its own method of creating, tracking and accessing/modifying customized actor values called "CAVs". It relies on a tiered array system:

Top tier: 1 master array, with an array entry for each active mod, plus 1 for dynamic references.
2nd tier: each mod array holds an array of all active actors, each with its own array entry.
3rd tier: each actor array holds the custom actor values, which can be numbers, strings, references, or arrays of other data related to the actor.
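To illustrate the nesting, here is a rough sketch of how such tiers can be built with OBSE arrays (the names, key types, and keys are purely illustrative, not SDR's actual internals):

array_var aMaster      ; top tier: one entry per active mod, plus one for dynamic refs
array_var aModActors   ; 2nd tier: the actors tracked for one mod
array_var aCAVs        ; 3rd tier: the custom actor values for one actor

let aMaster := ar_Construct StringMap              ; keyed by mod name
let aMaster["SDR.esp"] := ar_Construct StringMap   ; one entry per active mod

let aModActors := aMaster["SDR.esp"]               ; keyed by actor ref ID
let aModActors["0001ABCD"] := ar_Construct StringMap

let aCAVs := aModActors["0001ABCD"]                ; keyed by CAV name
let aCAVs["fSomeNumberCAV"] := 0                   ; a number CAV
let aCAVs["sSomeStringCAV"] := "example"           ; a string CAV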

I have custom functions (either through script or the sdr.dll) that create, access, edit, and delete the data as needed.

I already have a purging script that focuses only on dynamic references, checking whether they are still in the game; if not, they get removed from the arrays.
I also have a script that runs on save-game load and checks the active mod list; if any mods have been removed, those mod arrays, and thus all of that mod's actor references' CAV data, are removed from the master array.

Up until recently, I've been deleting an actor's array of CAVs at the moment they die and then removing the token that monitored and updated their CAVs.

However, I am currently experimenting with actors retaining their tokens even after death: the appropriate CAVs are zeroed out at the moment of death, a small few CAVs relevant to detection continue to update, and the token scripts simply exit early once the actor is determined to be dead.

Doing so resolves a few minor/potential issues that I've come across:
- It's necessary for other actors to be able to detect dead actors.
- Maintaining the CAVs, even if zeroed out, won't result in errors/inconsistencies when detecting dead actors vs. live actors.
- It helps prevent debugging problems in which SDR's detection system falls back to Oblivion's default detection result when the token is missing.
- There is also the (rare) possibility that an actor could be resurrected, in which case the token kicks in and the CAVs are updated without having to reinitialize anything or add a new token.
- There is also the possibility that in the future I may want to track and update certain kinds of new CAV data specific to dead actors (although nothing comes to mind at the moment).

Here's the conundrum:
My concern is that eventually a dead actor will be removed from the game. At that point, the CAVs data is definitely no longer needed and would, in my opinion, just end up as bloat.
I can test for this using "IsRefDeleted RefID" and "IsFormValid RefID", which cover persistent and nonpersistent records and are what I use when checking the list of dynamic references.

It seems to me the best approach is to create a purging script similar to the one that goes through all the dynamic references: find any that are marked for deletion or invalid, then remove those CAVs from the master array.

My concern is how much processing time it would take to roll through all of the actor records stored in CAVs. If someone has been playing for a long time with a heavy mod load, there could be thousands of active actor records to go through.

One thought I had was to create a "Dead Tracker" master array that is similar to the CAVs master array:
Tier one: master array, with a list of active mods
Tier two: mod array, with a list of dead references

Whenever someone dies, they are added to the "Dead Tracker". Then, whenever a "purge" check is made, SDR rolls through the list of dead actors, and if IsRefDeleted Ref == 1 or IsFormValid Ref == 0, it removes the CAVs data from the master array and then removes the actor from the "Dead Tracker". I'm thinking that would be the most efficient method.
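In rough OBSE terms, the purge pass could look something like the sketch below (the quest variable and the two helper UDFs are placeholders for the real removal code, not actual SDR functions):

array_var aTracker
array_var aModEntry
array_var aDeadList
array_var aDeadEntry
ref rDead

let aTracker := sdrQ.aDeadTracker          ; placeholder quest array for the "Dead Tracker"

ForEach aModEntry <- aTracker              ; one entry per active mod
    let aDeadList := aModEntry["value"]
    ForEach aDeadEntry <- aDeadList        ; one entry per dead reference
        let rDead := aDeadEntry["value"]

        ; gone from the game? then the CAVs are definitely bloat
        if eval (IsRefDeleted rDead) || (IsFormValid rDead) == 0
            ; hypothetical helper UDFs standing in for the real removal code;
            ; in practice, collect the keys and erase after the loop, since
            ; erasing entries while iterating over them can skip elements
            Call sdrRemoveCAVs rDead
            Call sdrDeadTrackerRemove rDead
        endif
    loop
loop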

The next question is to determine how often to run a purge check. I run the dynamic reference purge check fairly regularly: once every five minutes of active game play and once every time the game is saved. Presumably, running a "dead purge" wouldn't take too long, and I could have it run right before the dynamic references check.
------------------
Feedback welcome!
Marine Arrègle
 
Posts: 3423
Joined: Sat Mar 24, 2007 5:19 am

Post » Wed Sep 14, 2016 8:34 am

Just an idea that I use when I have to process a large number of items and do not want to do it all in a single frame.



1. Create a quest with an array and some code.

2. Add your items to that array (in your case, the CAVs, I suppose).

3. In the quest code, in GameMode (even better in MenuMode):

I. Check the first item in the array (in your case, whether the CAV should be deleted) and either

a. process the item and remove it from the array,

or

b. do not process the item; remove it from the beginning of the array and append it to the end for reprocessing later.



This way you can process one (or a few) items each frame, round robin, without creating a noticeable lag. fQuestDelayTime is also useful for tuning the frequency.



You might also use your mod array as the item, so you would check actors from one single mod each time the code runs.
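A minimal sketch of such a quest script (the quest, array, and UDF names are made up; it assumes aQueue has been filled elsewhere, and the entries could just as well be your per-mod arrays):

scn aaQueueQuestScript

array_var aQueue        ; filled elsewhere with the items to check
array_var aEntry
short bProcessed
float fQuestDelayTime   ; tune this to control how often the queue is serviced

Begin MenuMode

    if eval (ar_Size aQueue) == 0
        return
    endif

    ; take the first item off the front of the queue
    let aEntry := aQueue[0]
    ar_Erase aQueue 0

    ; FnProcessEntry is a hypothetical UDF: returns 1 if it handled the
    ; entry (e.g. the CAV was deleted), 0 if it should be retried later
    let bProcessed := Call FnProcessEntry aEntry

    if bProcessed == 0
        ; not processed: append it to the end for reprocessing later
        ar_Append aQueue aEntry
    endif

End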


Alex Blacke
 
Posts: 3460
Joined: Sun Feb 18, 2007 10:46 pm

Post » Wed Sep 14, 2016 4:04 pm

I really like the idea of using the mod list as the "item", so that the code executes for only one mod per frame/chunk of time.

I also like the idea of it happening in menu mode, since it is not data that has to necessarily be dealt with in game mode.

I'm wondering, though: which "menu" modes are the best choices? I don't think it should be all of them, since it needs to happen after a saved game is loaded or a new game is started. The obvious choice would be the menu mode in which the game is actually saved, so that if the purge is flagged to run, it happens at that point. If so, then I would probably move the "on save game" event handler script to that section instead. This would mean that if someone uses a quick save or a mod forces a save, it won't run a purge at that time (which is probably better anyway).

I also certainly don't want it running every time it goes into menu mode. I guess I could create a flag for each time it completes a run of all the mods. Then when the player switches back to game mode, the flag gets reset (perhaps on a timer).

The flag could also be reset whenever someone gets added to the death list.

So for instance:

MenuMode (save game menu only?):
- do stuff
- if flagged to purge:
> purge dead actors, one mod per frame until all mods are processed
> flag purge complete
> reset game mode purge timer (custom .ini game setting the user can define)

GameMode:
> purge timer is updated
> when the timer counts down to <= 0, flag to purge upon the next menu mode

OnDeathEvent:
> add to dead list
> flag to purge (needed?)
[also, I've noticed that summoned creatures don't seem to "die" per se, at least not in any of my debugging tests. Not necessarily an issue, since they are dynamic refs that will get cleaned up anyway]
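In rough script form, that flag/timer arrangement could look like the sketch below (everything here is a placeholder: the quest name, the timer default, and the unrestricted MenuMode block):

scn aaSDRPurgeControl

float fQuestDelayTime
float fPurgeTimer       ; seconds until the next purge gets flagged
short bPurgeFlagged

Begin GameMode
    ; count down; when the timer expires, flag a purge for the next menu mode
    let fPurgeTimer -= GetSecondsPassed
    if fPurgeTimer <= 0
        let bPurgeFlagged := 1
    endif
End

Begin MenuMode          ; could be restricted to the save menu's code
    if bPurgeFlagged == 0
        return
    endif

    ; ... purge dead actors here, one mod per frame, until all mods are processed ...

    ; once the full pass is complete:
    let bPurgeFlagged := 0
    let fPurgeTimer := 300    ; reset; ideally read from a custom .ini setting
End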

Also, something I've never experimented with or seen a good answer to: my guess is that the fQuestDelayTime timer gets reset to 0 whenever you switch from game mode to menu mode and vice versa. If that's the case, then I can change fQuestDelayTime to .01 in menu mode so that it runs every frame, and then reset it back to something like every 10 seconds when it returns to game mode. Does that sound plausible?
Matthew Warren
 
Posts: 3463
Joined: Fri Oct 19, 2007 11:37 pm

Post » Wed Sep 14, 2016 11:46 am

+1 to QQuix's suggestions. I actually do that regularly myself; in fact, I don't even use the popular token approach at all. At this point in SDR's development you may not feel like adopting this idea to cover all of your needs, but I think it's worth explaining what I usually do.


I make global scripts (quest scripts) that work as queues of actors/objects/whatever that need to be processed. Then I make UDFs to quickly register all the data that needs to be registered in those quests' arrays, and start the quests if they're not running. This way my code looks pretty clean, and I avoid using tokens altogether.


The main pros of this approach:


  • I can define how often to update the queues by modifying the fQuestDelayTime vars. Timers aren't typically needed.

  • It saves performance because those scripts don't need to run every frame.

  • It saves performance because there aren't many simultaneous instances of the same script running at the same time.

  • You avoid potential bugs related to the functions that add/remove inventory items, and GetContainer. You can manage it all with arrays.

  • You avoid potential savegame bloating caused by tokens creating strings or the like.

The cons:


  • You have to include some chunks of code that wouldn't be necessary with tokens, because tokens disappear when the carrying actor is deleted, but array entries don't.

  • The code can get a bit harder to write/read, especially when you need to poll every item in the queue at once in a single frame.




From my experience those timers don't reset to 0 when you go from GameMode to MenuMode and vice versa. I didn't test extensively, but that's what it looked like to me. Anyway, in your example, the problem is that the MenuMode side of the script would be delayed by (at most) 10 seconds every time you go into MenuMode.


You could make a secondary global script that runs every frame both in GameMode and MenuMode with the sole purpose of changing fQuestDelayTime vars where they need to be changed. Or something like that. Example:




scn TimerUpdater

float fQuestDelayTime

Begin GameMode
    let fQuestDelayTime := 0.001    ; this updater itself must run every frame

    if Quest1.fQuestDelayTime == 10
        return
    endif

    Let Quest1.fQuestDelayTime := 10
    Let Quest2.fQuestDelayTime := 20
End

Begin MenuMode
    let fQuestDelayTime := 0.001

    if Quest1.fQuestDelayTime == .01
        return
    endif

    Let Quest1.fQuestDelayTime := .01
    Let Quest2.fQuestDelayTime := .1
End


Scotties Hottie
 
Posts: 3406
Joined: Thu Jun 08, 2006 1:40 am

Post » Wed Sep 14, 2016 9:21 am

The idea of avoiding tokens is an interesting one and one I haven't thought of.

But I'm not sure how feasible it is without adding a lot more CAVs for tracking data.

Some of the data that gets evaluated has to be compared to what happened at the previous check. Examples include things like:
sneaking status
distance traveled
chameleon effects
detection levels
etc.

So although I *could* save that information as CAVs data, that info isn't really used anywhere else.

The other issue is the timing of the different sets of features. The actor tokens have processes that are evaluated in chunks of time: .1 seconds, .5, 1, 2, 4, and 8 seconds. The only variables that get updated every frame are the timers and early exit points (such as exiting if the actor is not currently high processing).

If I had a master quest script that evaluated all high-processing actors in a 17,000 unit radius and updated all the same chunks at the same time, I think there could be some weird performance hiccups at those .1, .5, 1, 2, 4, and 8 second marks.

By assigning the tokens in spurts, the timing of adding those tokens and when those chunks of processing kick in will be staggered. Or at least, that was my theory when I put it together.

Still, it's an interesting idea. There might still be a way to stagger processing those chunks. And it would be nice to remove the token aspect of it.

I'll have to give it some thought.

The greatest challenge will be dealing with high population density areas when rotating through the list at one actor per frame: if the frame rate is 30 fps and there are over 30 actors, some folks will get lost in the shuffle.
Gisela Amaya
 
Posts: 3424
Joined: Tue Oct 23, 2007 4:29 pm

Post » Wed Sep 14, 2016 10:25 am

You can keep individual timers for every actor if you want, and still benefit from not using tokens at all. It does require a substantial amount of analysis, though, so it *might* not be worth the trouble, especially if you have to rework many scripts. I don't use CAVs, so I typically do this in a matrix.


In your case, it seems you would need 6 parallel arrays for the timers. You can update the timers for every actor to keep the same functionality, but for this you'd need to iterate through the entire matrix every time the script runs. The good part is that you'd only need to call GetSecondsPassed a single time, before starting the loop.



I don't know what your current method for distributing the tokens is, but you should be able to use mostly the same method to enqueue the actors, which would prevent the performance hiccups because the timers for those actors wouldn't run in sync.



Since I brought that up, I can give you a more elaborate example later if you want.

Kirsty Wood
 
Posts: 3461
Joined: Tue Aug 15, 2006 10:41 am

Post » Wed Sep 14, 2016 8:44 am

Since you got into timers, let me mention another approach I use in this scenario: pretty much the same as my previous example, but instead of plain arrays I use map arrays where the key is a timestamp (float = GameDaysPassed * 24 + GameHour).



Since maps are kept in ascending key order, every frame (in my case) I check the first entries in the map and process every item whose timestamp is lower than 'now' (i.e., whose timer has run out). After processing, I decide when I want to review that item next and add it back to the map with key = 'now' + interval (1 second? 1 minute? 1 hour?). (Before adding, you have to check whether there is already an entry with that timestamp, in which case you keep adding .00001 to the timestamp until you find an 'empty spot'.)
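The enqueue step can be wrapped in a small UDF, something like this sketch (the names are mine, and it assumes the interval is already expressed in game-hour units):

scn FnQueueAdd          ; hypothetical UDF: enqueue an item at 'now' + interval

array_var amQueue       ; the shared map array, passed in by the caller
array_var aItem
float fInterval
float fKey

Begin Function { amQueue, aItem, fInterval }

    ; 'now' in game hours, plus the requested interval
    let fKey := GameDaysPassed * 24 + GameHour + fInterval

    ; map keys must be unique, so nudge the timestamp until an empty spot is found
    while eval (ar_HasKey amQueue fKey)
        let fKey += .00001
    loop

    let amQueue[fKey] := aItem

End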



This way you only handle very few entries and do not have to walk any large list of items. The every-frame overhead is very low, because it is just a matter of comparing ar_First with 'now' and exiting if there is no item to process.



If you have multiple actions with different intervals for the same item, the map value could be a two-entry array, with the item array as one entry and an action identifier as the other (sort of like a struct in C++).

james tait
 
Posts: 3385
Joined: Fri Jun 22, 2007 6:26 pm

Post » Wed Sep 14, 2016 9:36 am

QQuix, if you're trying to make me look bad, you're doing a great job! ;)


It doesn't hurt to have a little imagination, huh? :P


That's a very nice method. In fact, I might borrow it sometime. Thanks for sharing!

candice keenan
 
Posts: 3510
Joined: Tue Dec 05, 2006 10:43 pm

Post » Wed Sep 14, 2016 10:18 pm

Well, I'm not exactly sure if I understand everything mentioned, but extrapolating from what you both stated, I could probably do something like this:

1. Have a single master "actor updater" quest script that runs every .01 seconds (maybe every .1; I'm on the fence on that*).
2. Capture GetSecondsPassed only once.
3. Retrieve a list of all active high-processing actors within a proper range (17,000 units ~ 4 cell depth).
3a. The list of high-processing actors is updated only under a few circumstances that can be flagged: saved game load, cell change, a summoning spell being cast, or a timer (every four seconds).
4. Roll through the list and "initialize" any new ones (assigning a core set of CAVs and running all timer functions once).
4a. Rather than use the same timer for all actors, each actor could have a CAV for each timer (.1, .5, 1.0, 2.0, 4.0, 8.0).
4b. Each timer entry in their CAV data is updated with GetSecondsPassed.
4c. If a CAV timer meets its minimum (.1, .5, etc.), that CAV timer is reset and the appropriate function sub-script is called.
This should naturally stagger when the sub-scripts fire; that way, you wouldn't get the "2.0" second process firing in the same frame for everyone.

Note that I'm using the GetNextRef technique since GetHighActors doesn't seem to be reliable.

CAVs is just an array method for storing and retrieving variables outside of a token system. So I would definitely have to create more CAVs variables to track the extra data (timers, token variables, etc.), but having all actors updated by one master script and skipping the whole token thing sounds kind of nice, especially since I no longer plan on removing the tokens when an actor dies.

I wonder how significant a performance improvement it would be, though. Even though I wouldn't be updating the list of high actors constantly, rolling through them every frame to update/check timers seems pretty intensive. But perhaps less intensive than having upwards of 30+ tokens each running a script every frame.

You've given me much to ponder.

* The reason I am on the fence on the .1 vs. .01 is that I am concerned about all actors firing the .1 sub-scripts all at the same time in the same frame. By having the quest run every .01 seconds, that should hopefully spread things out a little bit, depending on when a new actor shows up to the party, so to speak.

sidenote: creating a CAV for the first time is somewhat time intensive, but still very fast. Accessing/editing a CAV after it has been created is super fast.
Donatus Uwasomba
 
Posts: 3361
Joined: Sun May 27, 2007 7:22 pm

Post » Wed Sep 14, 2016 8:27 pm

Interesting. I'll make sure to check CAVs out. Sounds nice.



Between CAVs and QQuix's method for this particular purpose I'm not sure which one I'd pick, though. Using CAVs would probably simplify a part of the process, but the timestamp concept doesn't even require GetSecondsPassed calls. GameHour is a float-type global, so it changes every frame. As Batman said *clears his throat* I mean... as QQuix said, you can make a map using the current game time + X seconds as the keys, and since you want to process more than one actor at a time, you can make arrays for the values, or even StringMaps. An example of how the structure could be:

arQueue -> the Map array using timestamps as keys, and StringMaps as values.

arQueue[Timestamp] -> a StringMap of Arrays, with the name of the action as the key.

arQueue[Timestamp]["ActionName"] -> an Array of actors for that action.

arQueue[Timestamp]["ActionName"][X] -> an actor ref.

Then you would basically loop over the first element(s) of arQueue until you find a Timestamp that is >= the current game time. Every element in arQueue with a greater key still wouldn't need to be processed, so you can skip them, saving processing time.
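That processing loop could look something like this sketch (illustrative names; the per-action dispatch is elided):

array_var arQueue    ; Map: timestamp -> StringMap of action arrays
array_var arActions
array_var aAction
array_var aActors
float fNow
float fKey

Begin GameMode

    let fNow := GameDaysPassed * 24 + GameHour

    ; consume the earliest timestamps while they are due
    while eval (ar_Size arQueue) > 0 && (ar_First arQueue) < fNow
        let fKey := ar_First arQueue
        let arActions := arQueue[fKey]

        ForEach aAction <- arActions
            let aActors := aAction["value"]    ; the actor array for this action name
            ; ... dispatch each actor in aActors to the UDF matching aAction["key"] ...
        loop

        ar_Erase arQueue fKey
    loop

End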

You could even combine that concept with a specific UDF for every kind of action, to take advantage of the possibility of calling Return early within the UDF, and to shorten the quest script too.





I believe it was the OBSE devs who found out long ago that the script engine can actually lag more if there are many scripts running at the same time than when there are few simultaneous heavy scripts. I think I read it from scruggsywuggsy the ferret in one of the talk pages at the CS wiki long ago. I can't vouch for it because I didn't actually test it myself, but from my experience it seems to be quite true.

Alan Whiston
 
Posts: 3358
Joined: Sun May 06, 2007 4:07 pm

Post » Wed Sep 14, 2016 8:36 pm

@snazzyCM
It's true that long scripts can be optimized more. The issue with long scripts is that the script engine processes the entire script, even the parts it doesn't need to execute, unless it encounters a return statement.
That can be avoided using the "fail fast" pattern: check the conditions that are most likely to fail first, and in that case apply a return statement.
Kirsty Collins
 
Posts: 3441
Joined: Tue Sep 19, 2006 11:54 pm

Post » Wed Sep 14, 2016 12:20 pm

Well, there you have it. Confirmation from one of the current OBSE devs. Thanks Ilde, it makes my efforts worthwhile :)

Rodney C
 
Posts: 3520
Joined: Sat Aug 18, 2007 12:54 am

Post » Wed Sep 14, 2016 11:30 am

@snazzyCM This is empirical experience, not formally proven data.
Don't ever appeal to authority; it's a logical fallacy.
A properly optimized long script (fail-fast pattern) can tax the engine less than many shorter scripts.
This is particularly true if the small scripts depend on (or try to access) some centralized data.
Also, a centralized big script is easier to optimize, because you have a better view of the whole.

However, big scripts that don't use the return statement correctly, or don't use it at all, may have the completely opposite effect.
Also, if the script is very big (compiled size > 6 KB, more or less), it can cause some instability in the engine, which is felt much more if you already have script-heavy mods.
I made this mistake once.

A compromise is to use a centralized script, but separate some pieces of code into User Defined Functions.
Frank Firefly
 
Posts: 3429
Joined: Sun Aug 19, 2007 9:34 am

Post » Wed Sep 14, 2016 11:31 am


Thanks for the advice. I just try to be humble, you know. :wink_smile:




The former is not something I'd ever do, but I wasn't aware of the latter. It's good to know.




Which is precisely what I suggested above, and what I usually do in my own scripts.


Thanks again! :)

meg knight
 
Posts: 3463
Joined: Wed Nov 29, 2006 4:20 am

Post » Wed Sep 14, 2016 10:47 am

That's pretty much what I do. I use returns whenever practical (thus the .1, .5, 1, 2.0, 4.0, and 8.0 second time frames), and then UDFs to break out larger chunks that don't need regular processing to keep the main script shorter.

I *think* I understand QQuix's approach, but I'm having some trouble wrapping my head around it, maybe because it's not obvious how to apply it to my situation.

Essentially, the way I'm looking at my problem, it would be thus:
1. Create queue of high processing actors
2. For each actor, update the current timer for each action segment (.1, .5, etc.)
3. If the new timer is >= the action segment's requisite, call the sub-function for that actor and reset that timer.

My array structure of stored data would be as follows:
Master Active Mod List (Array) -> Actor List per mod (String Map) -> CAV list per actor (Map) -> CAV data (String Map)

The timers would be part of the CAV data entries. I already have custom sdr.dll functions for accessing the data, for example:
ActorRef.sdrCavNumGet "iCavDataStringID" // returns the number stored
ActorRef.sdrCavNumMod "iCavDataStringID" SomeNum // modifies the number stored with the passed value and returns the new total
ActorRef.sdrCavNumSet "iCavDataStringID" SomeNum // sets the number stored to be the passed value, returns the value (to confirm)

I also have a UDF for pulling active high processing actors which will be updated regularly elsewhere.

So as I roll through the active high processing actors in the master quest, it would process along these lines (I'm not at home, so please forgive syntax errors):

float GSP
array_var aActor
ref rActor
float fTempTimer

Begin GameMode

    Let GSP := GetSecondsPassed

    ; stuff

    ForEach aActor <- sdrQ.HighProcessingActors
        Let rActor := aActor["value"]

        ; check whether the actor has been initialized first; if not, initialize and process all sub-functions
        if eval (rActor.sdrCavNumGet "iCavInitialized") != 1
            ; ... insert various code here ...
            ; calls a UDF that creates the default CAVs and sets all timers to the minimum needed to trigger on the next pass
            Continue    ; skip timers for the newly initialized; they will be updated on the next pass/frame
        endif

        ; update all timers for the actor and check whether they need processing

        ; .1 second update checks
        Let fTempTimer := rActor.sdrCavNumMod "fCavTimer01" GSP
        If fTempTimer >= .1
            rActor.sdrCavNumSet "fCavTimer01" 0
            rActor.Call sdrActorUpdate01
        endif

        ; .5 second update checks
        Let fTempTimer := rActor.sdrCavNumMod "fCavTimer05" GSP
        If fTempTimer >= .5
            rActor.sdrCavNumSet "fCavTimer05" 0
            rActor.Call sdrActorUpdate05
        endif

        ; 1.0 second update checks
        Let fTempTimer := rActor.sdrCavNumMod "fCavTimer10" GSP
        If fTempTimer >= 1.0
            rActor.sdrCavNumSet "fCavTimer10" 0
            rActor.Call sdrActorUpdate10
        endif

        ; etc. etc.

    Loop

End
Assumptah George
 
Posts: 3373
Joined: Wed Sep 13, 2006 9:43 am

Post » Wed Sep 14, 2016 1:58 pm

I was thinking about something like this (no need for timers in the CAVs).



(I am also without Oblivion - still could not reinstall it)



This way, each actor will be in the queue several times, once for each interval.



This is a simple implementation. snazzyCM's is more elaborate, as it groups actors together.



array_var amQueue
array_var aActor
ref rActor
ref rUDF
array_var ar
float now
float t
float sec
short doonce

Begin GameMode

    let now := (GameDaysPassed * 24) + GameHour

    if doonce == 0
        let doonce := 1
        let amQueue := ar_Construct Map
        let sec := 1.0 / 60 / 60    ; 1 second in game-hour units

        ForEach aActor <- sdrQ.HighProcessingActors
            Let rActor := aActor["value"]

            ; Your initialization
            ;-------------------------------
            ; check whether the actor has been initialized first; if not, initialize and process all sub-functions
            if eval (rActor.sdrCavNumGet "iCavInitialized") != 1
                ; ... insert various code here ...
                ; calls a UDF that creates the default CAVs
            endif

            ; Add the 1 second processing
            ;-------------------------------
            let ar := ar_Construct Array
            let ar[0] := rActor
            let ar[1] := sdrActorUpdate01
            let t := now + sec

            while eval (ar_HasKey amQueue t)
                let t += .00001    ; not sure if floats can handle this resolution
            loop
            let amQueue[t] := ar

            ; Add the 5 second processing
            ;-------------------------------
            let ar := ar_Construct Array
            let ar[0] := rActor
            let ar[1] := sdrActorUpdate05
            let t := now + 5 * sec

            while eval (ar_HasKey amQueue t)
                let t += .00001
            loop
            let amQueue[t] := ar

            ; Add the 10 second processing
            ;-------------------------------
            let ar := ar_Construct Array
            let ar[0] := rActor
            let ar[1] := sdrActorUpdate10
            let t := now + 10 * sec

            while eval (ar_HasKey amQueue t)
                let t += .00001
            loop
            let amQueue[t] := ar
        Loop
    endif

    ; process every entry whose timestamp has passed
    while eval (ar_Size amQueue) > 0 && (ar_First amQueue) < now
        let t := ar_First amQueue
        let ar := amQueue[t]

        let rActor := ar[0]
        let rUDF := ar[1]

        rActor.Call rUDF
        ; each UDF must return the actor to amQueue at now + 1*sec, now + 5*sec and now + 10*sec respectively
        ; the UDF also has the choice of not returning the actor to the queue

        ar_Erase amQueue t    ;[EDIT]
    loop

End
Daniel Holgate
 
Posts: 3538
Joined: Tue May 29, 2007 1:02 am

Post » Wed Sep 14, 2016 8:59 am

In all honesty, I'm just not following it. I think it's too advanced a concept for me and I'm too entrenched in the system I created.

It seems to me that although the CAV variables for the timers wouldn't be needed, per se, you still end up having to create array/data slots for those timers to determine when the UDFs should execute. But instead of just updating a single slot of data in an existing array for each actor (CAVs), you are constantly creating and reassigning arrays every time you need to evaluate a timer.

Again, part of the problem here is that I don't fully understand what's going on.

Here is my interpretation of what is going on data structure wise:

CAVs system:
One array of CAV data is created once per actor in the initialization phase.
List of data includes six string map entries for the six timers.
The timer entries are updated with GSP, and if appropriate, UDF is called and timers reset.
No new arrays are created during the timer updates/UDF calls.

Global+Now system:
One array of CAV data is created once per actor in the initialization phase.
Timers are not included.
A map array is created using global game time ("now") as the key, to establish a queue.
Each queue entry has an array created for it that includes the actor and a reference to the UDF that should be executed.
The entry point is the "time slot" at which the actor/UDF gets executed.
After execution, arrays are deleted so that they aren't run a second time.

It *feels* like there is a lot of room for duplication, errors, or possibly missed UDF calls.

Also, is constantly creating and deleting the "timer" arrays going to be more efficient than just updating timer data stored in an existing array that only gets created once?
Louise
 
Posts: 3407
Joined: Wed Nov 01, 2006 1:06 pm

Post » Wed Sep 14, 2016 3:23 pm

With the queue, you only handle very few entries, if any. (think of it as a timer event list)


With the timers you have to walk all the list.



Option 1 executes fewer script lines, but the functions are heavier (as you pointed out, the queue approach needs array insertions and deletions)


Option 2 executes more script code, but with lighter functions (GSP, additions, ifs).



I honestly can't say which has the best performance.



No room for errors, though. The only risk is the float variable not handling the required resolution (1 hour / 60 min / 60 sec / 30-60 fps), in which case, in that example of yours with 30+ actors in a 30 fps game, the handling of actors would fall behind. This would be a good reason to group actors as snazzyCM suggested.

Sheeva
 
Posts: 3353
Joined: Sat Nov 11, 2006 2:46 am

Post » Wed Sep 14, 2016 9:37 am

A small clarification there: scripts don't run concurrently in Oblivion, except when references are being loaded into memory. The loading process happens in a background thread, and once a reference is loaded, its scripts are executed once in the background thread's context. In all other cases, scripts are executed in a serial/consecutive fashion.

Lucy
 
Posts: 3362
Joined: Sun Sep 10, 2006 4:55 am

Post » Wed Sep 14, 2016 7:54 pm

I think I'm getting the hang of it now.

Perhaps a separate queue for each UDF block type. That way you could add multiple actors that would have that UDF execute within the same timeframe/segment.

It could be a scenario where each actor is added to each queue once upon initialization, and then when their "queue" slot executes, they get added back to the end of that queue with the next time slot as part of the process.

However, I would have to check high-processing status for each person in the queue, as well as "dead status". If someone isn't high processing, does the UDF just not execute? And if so, do they have to keep getting added to the queue? Considering that there are hundreds, if not thousands, of actors (especially with mods that add content), there could be massively long queues where most of the actors aren't in high processing, but I would still have to deal with them anyway by adding them to the end of the queue to make sure they get processed the next time they pop up.

I must be missing something else, because that doesn't seem efficient at all.
Rowena
 
Posts: 3471
Joined: Sun Nov 05, 2006 11:40 am

Post » Wed Sep 14, 2016 7:29 am

I also think there is some info missing somewhere. Let's see if a simplified example helps determine whether we are on the same page.



Let's suppose we have 300 actors to deal with in a 30 fps game, and we want to check each one every 5 seconds. That means we have to process 2 actors per frame.



With the queue they are in chronological order, so the first 2 are the ones you have to process. When you get to the 3rd, you see he is due in the future (next frame), so you stop right there and do not have to check the other 297. The size of the queue is not relevant.



With the timers, you have to check all 300 every frame to find the 2 whose timer ran out.



As I understand it, we are discussing the best way to find those 2 guys.



Once you've found them (by either method), THEN you decide what to do as part of their processing (dead? high processing?). (1) Queue: should he go back to the queue to be re-selected in 10 seconds? (2) Timer: should his timer be set to 10 seconds? In both cases you have to process 2 actors per frame, and the decision of what to do with them is independent of the method you used to find them.



Did I understand it right?



As I see it, the larger the population, the more advantageous the queue approach becomes, as it is pretty much independent of the size of the population. But it only works if you can forget about the actor during the interval.

stacy hamilton
 
Posts: 3354
Joined: Fri Aug 25, 2006 10:03 am

Post » Wed Sep 14, 2016 8:52 pm

ah. I think the problem is that there are routines that happen every .1 seconds that need to be applied to high-processing actors (dead or alive). The reason is that detection checks are made every .3 seconds per actor and the detection checks are staggered, so I have to keep some data updated on a regular basis.


So with the example you provided (300 actors in the queue), the number of records in the queue that would have to be processed would be way more than just two in that one frame. .1 seconds is 3 frames at 30 fps, so to process all the .1 second intervals, assuming an evenly staggered distribution, we are talking about processing 100 actors per frame before the rest of the queue stops being processed.



Presumably it would be handled like this in the six possible scenarios that I can think of:



Actor ref is invalid: UDF will not fire


Actor ref is marked for deleting: UDF will not fire


Actor is not high processing: UDF will not fire - new entry added to the next queue point (.1 seconds later).


Actor is disabled: UDF will not fire - new entry added to the next queue point (.1 seconds later).


Actor is dead: UDF fires, processes about 30 lines of code, returns early - new entry added to the next queue point (.1 seconds later).


Actor is alive: UDF fires, processes about 100 lines of code, returns early - new entry added to the next queue point (.1 seconds later).



Once all actors scheduled for that time slot are processed, that time-slot entry gets deleted from the queue, automatically removing all the actor entries as well.



Assuming a queue of 300 actors at 30 fps, approximately 100 would get processed every frame.


Compared to capturing a list of all high processing actors in the area (which could easily be 30+ in exteriors if within a 17,000 unit radius, running through all of them every frame, and checking/updating the timers on each one...


Yes... I think I am beginning to see the point. Especially when you get to the other processes that are at longer intervals.


I think the trick is to avoid running through all the high actors when updating timers and just make sure they are added to the queue when they are first initialized. I will still have to run an all-high-actors check to make sure that new actors are initialized, but that doesn't have to happen every frame, only when it's been flagged to happen (which I've discussed earlier).


Is that about right?


If so, I am much more inclined to adopt your suggestion. Even though there are more arrays being created/destroyed, I think in the long run it will be more efficient than what I had originally planned to do.


EDIT:

Basically in exterior settings with large populations, the queue you mentioned will be very efficient.

In interior settings with low populations, there will be a lot of unnecessary processing since most of the actors in the queue won't be around.


I'm not sure how helpful it will be for performance once the queue grows beyond 300 actors, which I think is quite possible. There are (if I recall correctly) well over a thousand NPCs and creatures in just the core Oblivion mod. At that rate, we could be talking about processing several hundred actors per frame, even though maybe only 1/10th of them may actually be active and high processing.


EDIT 2:

What if there were two queues: one for exteriors and one for interiors. The exterior queue is constantly maintained, but the interior queue is generated each time the player transitions from the exterior world to the interior and changes cells? The queue would be much smaller and only last for the duration the player is in the interior location.
Patrick Gordon
 
Posts: 3366
Joined: Thu May 31, 2007 5:38 am

Post » Wed Sep 14, 2016 5:39 pm






Yes. I'm aware of that and I think I understand it, mostly. I just don't have the technical lexicon to accurately quote programmers such as scruggsy or yourself just from memory. I do have some programming experience but I am still by all definitions an amateur. Thanks anyway for pointing it out :)


Edit: Just to check if I understand what you mean: you made a little mistake at the end of your statement, right?




Otherwise please clarify :)

----

I came up with another (more complex) way to take advantage of QQuix's method. Instead of an action identifier, the structure could have a parallel array of UDFs. Example:

arQueue -> the Map array using timestamps as keys, and Arrays as values.

arQueue[Timestamp] -> an Array with 2 Arrays as elements: one for the actors, one for the UDFs.

arQueue[Timestamp][0] -> an Array of actors.

arQueue[Timestamp][1] -> an Array of UDFs.

arQueue[Timestamp][0][X] -> an actor.

arQueue[Timestamp][1][X] -> a UDF.

This way you wouldn't even need to identify the action in question; you could simply pass the UDF in a ref var to Call. As in:

rActor.Call rUDF args

Edit: This would mostly work if the array-entry management routine is done from within the UDFs themselves, so you'd probably want to pass the relevant array data as arguments to the UDFs too. Something like...



Call rUDF rActor iKey
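For example, one such UDF might look like the sketch below (everything here is hypothetical: the quest aaSDRQ with its amQueue map and sec constant, and the function receiving itself in rUDF so it can re-enqueue the actor):

scn sdrActorUpdate01Fn    ; hypothetical UDF for the .1-second action

ref rActor
ref rUDF
array_var ar
float fNext

Begin Function { rActor, rUDF }

    ; fail fast: invalid or deleted refs are not processed and not re-enqueued
    if eval (IsFormValid rActor) == 0 || (IsRefDeleted rActor)
        return
    endif

    ; ... the actual .1-second processing for rActor goes here ...

    ; re-enqueue this actor/UDF pair for .1 seconds from now
    let ar := ar_Construct Array
    let ar[0] := rActor
    let ar[1] := rUDF
    let fNext := GameDaysPassed * 24 + GameHour + (0.1 * aaSDRQ.sec)

    while eval (ar_HasKey aaSDRQ.amQueue fNext)
        let fNext += .00001
    loop
    let aaSDRQ.amQueue[fNext] := ar

End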
Jade Payton
 
Posts: 3417
Joined: Mon Sep 11, 2006 1:01 pm

Post » Wed Sep 14, 2016 7:10 am

Interesting. I would have to make sure that the UDF in question returns a value that establishes the interval of time until the next check, so that the next queue item can be created correctly, yes?

Let fNextCheck := rActor.Call rUDF args

; create the next timestamp/queue/actor entry based on fNextCheck added to the current timestamp

Gill Mackin
 
Posts: 3384
Joined: Sat Dec 16, 2006 9:58 pm

Post » Wed Sep 14, 2016 11:17 am


I edited my message to clarify another way to do the same thing, but yes, that is also another valid way to go about it.
Sun of Sammy
 
Posts: 3442
Joined: Mon Oct 22, 2007 3:38 pm
