Not possible to change backup folder

I don’t think it would have made much difference whether either of us had joined the “beta” or not.

You can’t view your data in the application, I can’t export to PDF or images…
The PDF version used is old and has pixel limitations on dimensions… so even though it is a vector image, it has constraints because of the PDF version used… instead of just creating an SVG vector image directly…
Well, I am glad I use a 4K monitor; I can at least take some screenshots if I want to add the timeline to a document or something…

I’m sure they’ll fix these things eventually… I really hope they fix the export problem first of all, because backups I can do with a script.
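
Something like this minimal sketch is what I mean (just a timestamped copy; the paths are placeholders for my own setup):

```python
# Minimal sketch of an external backup script: copy a timeline project
# to a timestamped file in a backup folder. Paths are placeholders.
import shutil
import time
from pathlib import Path

PROJECT = Path(r"D:\Timelines\ships.aeon")       # hypothetical project file
BACKUP_DIR = Path(r"D:\Backups\Timelines")       # hypothetical backup folder

BACKUP_DIR.mkdir(parents=True, exist_ok=True)
stamp = time.strftime("%Y%m%d-%H%M%S")
target = BACKUP_DIR / f"{PROJECT.stem}-{stamp}{PROJECT.suffix}"
shutil.copy2(PROJECT, target)                    # copy file with metadata
print(f"Backed up {PROJECT} -> {target}")
```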

For now, wouldn’t it be a solution to tick “Disable automatic backup” and set up external backup software such as rsync or Backup Service Home? For me, proprietary backup solutions that are integrated into applications are a fallback at best.

Peter

Already done. But there is no need to disable the function; I just use it as versioning instead of backup, a fallback if I should do something really stupid…

But it is still a showstopper functionally, because it can’t be set per project, so now I get “backups” of “auto save” files, “backups” of tests, etc.
So if I want to test a large data import, I would need to turn off the “backup”, test the import in a test file, delete the test file even if the import went well, and then turn the backup on again so that the “live” projects get “backed up”… I don’t need backup/versioning of test databases.

Another problem is that even though I keep my timeline projects in another location, the “backups” still get saved to the default location on “C:”, and for some reason I also get “backups” of some “auto save” files…

Using “save as” to take a manual backup is not a good solution either, because after a “save as” the copy is the project file that is actually open. You then need to close that file and open the “original” again; if you forget and start editing/adding, you end up working on what you intended as a backup… and you get “backups” in the “appdata\local\Aeon Timeline\Backup” folder on “C:” when saving those changes.

Well, those are a lot of good arguments to turn off automatic backup, aren’t they?

In another thread users talk about gigabyte-sized timelines (no idea how to create something like that; with high-resolution images maybe?), so automatic backups might become a space problem at some point anyway.

The discussion made me curious, and I took a closer look at auto backups. It is remarkable that the read-only variant of Aeon 3 (trial period expired) still saves two full-size copies of each opened project.

The next surprise: Aeon Timeline 2, which I am still using, has also accumulated a nice collection of backup files that I had not been aware of. It looks like there is no option to turn this off at all. But at least you can choose the location, and access is intended via the file system, i.e. via Explorer. That’s what you wanted, isn’t it?

Yes, I will have to do that for my “live” projects, but at the moment I have stopped using AT3 for anything other than testing and trying to build a Configuration/Template for my research objects… I need a lot of workarounds to get the result I want if I am to use Aeon for this…
And I can’t use Aeon for anything “live” because I have no way of exporting the results except CSV… and for that I could just use Excel…

I understand the problem. For special solutions you can program scripts and macros as I have presented them elsewhere here in the forum. However, the exported csv files do not contain all the information to resolve more complex dependencies (e.g. the short labels used for relationships). Personally, I prefer to access the native file format directly. With the new .aeon format, at least read access looks feasible to me. I have started with a Python module for this purpose, but at the moment I don’t want to pursue the topic because November, and thus NaNoWriMo, is approaching.

I did this in AT2 by extracting the json file from the zip,

I have just taken a quick look at the version 3 file and find that there are a lot of internal references. I am not sure I want to start trying to transcode that into a network graph or other formats; I am not that good at Python or any other programming language, and I really hoped to spend more time researching and finding data than programming my own solutions… I just can’t get my head around writing the code, even though I know in theory how I want things to work…

I have looked at your scripts but not tested them. I can see that several of them would be of great use for writers who do not use Scrivener, e.g. your yWriter scripts etc.

That is exactly the point. Parsing JSON or CSV is a piece of cake with the standard Python library. And for extracting the JSON part from the .aeon file, I have a robust solution by now. The rest will have to wait at least until my next writer’s block. For the moment there is a thriller in the queue.

Cheers,
Peter

Then I hope you release that script too when you feel ready; maybe I can learn something from it…

And… maybe it will be possible to combine data from Gramps with data from AT3 in a network graph next year or so… :clap: :clap: :pray: :pray:

Sure. Anyway, my paeon project is open source, so publicly available.
The current timeline modules map the timeline to the yWriter novel metamodel, but the JSON extractor in aeon3_fop.py is generic.

I just put a standalone Python script online for extracting and pretty-printing the JSON part from an .aeon or .aeonzip file. A Python installation is required. The script has a command line UI, but it should also work via drag-and-drop.
Beware: it might struggle with gigabyte-sized projects.
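
For the .aeonzip case the core of it boils down to something like this (a simplified sketch, not the published script; it simply takes the first JSON entry it finds in the archive):

```python
# Simplified sketch: pull the JSON payload out of an .aeonzip archive
# and pretty-print it. Assumes the archive contains at least one *.json entry.
import json
import sys
import zipfile

def read_timeline_json(path):
    with zipfile.ZipFile(path) as archive:
        for name in archive.namelist():
            if name.lower().endswith(".json"):
                with archive.open(name) as member:
                    return json.loads(member.read().decode("utf-8"))
    raise ValueError(f"No JSON entry found in {path}")

if __name__ == "__main__":
    data = read_timeline_json(sys.argv[1])
    print(json.dumps(data, indent=4, ensure_ascii=False))
```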

This is off-topic now: I had a quick look at Gramps and was impressed by the variety of import and export options, as well as the documentation and tool support. I would be interested to know how you envision connecting Aeon and Gramps. From the looks of it, Gramps has people, places, and events, among other things. With database reports you should be able to generate a variety of views. Furthermore, there is XSLT support and everything your heart desires. Is it primarily about seeing a timeline, or is it also about seeing relationships as in Aeon’s spreadsheet view? Or is it about showing events that are not stored in Gramps?

By the way, I have now extended my JSON extraction script to process .aeonzip files as well.

I use Gramps as storage for my “researched objects”, mostly because of all the advanced features, even though there are still some limitations… and not only for “genealogy objects”. Gramps is advanced software, but with a few additions it could be near-complete historical research software…
All it needs is:

  • Main and Sub Events
  • Places as subject for Events (coming)
  • Better support for interchangeable citations/bibliography (i.e. CSL). (I think they are working on it, even though some of the “developers” call me an idiot for asking, because Zotero can’t do what they want in their citations. I have tried to tell them that they could create a new CSL citation style for Gramps, which other software supporting CSL could then use, but some of them are still stuck on the Zotero thing because I once mentioned Zotero as an example)…
  • Some export to interchangeable standard formats; I used GraphML, JSON-LD and CSL-JSON as examples… I think there will be an export feature to GEXF in the near future, though.

At the moment there are few analysis features in Gramps, so I use Cytoscape to look for relations and links between objects, places, events and people that can’t be viewed in “family relations only” software.
And Cytoscape and other network graph software (Constellation has a timeline module and a map module, but I have not tested it for my purpose yet) do not have a timeline view showing where in time events and objects existed, so I started to use The Timeline Project for that. Then I found AT2 and liked its visuals. With a timeline it is possible to see whether two objects could have been at the same place at the same time, or whether e.g. two similar names were so far apart in a given period that there is no way they could be the same person… even though I don’t have exact dates. For example: a seaman on board a ship, could he have signed on to another ship in a given port at a given time? Or, one of my main problems when I started this ships-and-voyages research: how could a seaman sign on in Kristiania, Norway on a given ship at a given date, when that ship was actually on a voyage between Tampico, Mexico and Providence, US? Did any ship leave Norway for a city near Providence that could reach a port in that area within the given period, or did the ship he signed on call at other ports too…
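
Just to make that kind of reasoning concrete, a tiny sketch with made-up dates, checking whether two fuzzy date ranges can overlap at all:

```python
# Tiny sketch with made-up dates: can two fuzzy date ranges overlap at all?
# E.g. the window in which a seaman signed on in one port vs. the window in
# which his ship was away on a voyage between two other ports.
from datetime import date

def can_overlap(start_a, end_a, start_b, end_b):
    """True if the two inclusive date ranges share at least one day."""
    return start_a <= end_b and start_b <= end_a

signing_window = (date(1915, 3, 1), date(1915, 3, 10))   # hypothetical record
voyage_window = (date(1915, 2, 20), date(1915, 4, 5))    # hypothetical record

if can_overlap(*signing_window, *voyage_window):
    print("The ranges overlap: both records cannot be true as written.")
else:
    print("The ranges do not overlap: no conflict between the records.")
```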

I have got a lot of those answers using a combination of AT2, Obsidian with Juggl, and Cytoscape, but combining results from 3-4 programs and getting them into a fifth, without programming knowledge, has been a hassle and has mostly been done manually. Now that I am starting to add many thousands of Norwegian merchant ships to the research, I don’t want to work manually anymore, or at least as little as possible. I need to transcribe old Norwegian newspapers for that information; the OCR for those letters is just not good enough, and training Tesseract with new fonts etc. is way outside my base of knowledge. And since this is tables in newspapers, I either need to correct the errors in Excel or in a Markdown table, and that work is huge…
Approximately one newspaper every week with those lists: linking ships, dates and locations (countries, ports of call) together and creating something structured out of it…

So most of my use of any software is for registration and analysis of data, but the Weekly Ship Index I would really like to publish as Open Data at some point: a network graph/knowledge graph with a timeline view of all the ships, some of the crew lists and manifests, information about the owners and lines, etc. At the moment I mostly use Obsidian for my research “notes”, because of the Juggl addon that can save the graph to a cytoscape.json file. But nearly all addons/templates for publishing to static pages use GitHub, and I am not sure I want to use GitHub for this…

Much of this is already out there, but nearly everything covers the WW2 period and ships sailing during the war…

And the reason I use Obsidian is of course also that Foam is starting to mature, so there will be “backup” software that can utilize most of the YAML and graph settings with little rewriting…
I hope Zettlr will support wikilinks with pipe aliases the way wikis use them, but I am not sure that is going to happen… but there is an addon for Obsidian that can convert links both ways, so…

Oooh… I also used Twine 2 for a period because of its graph view and exported it to a Graphviz file; I got the developer of the dot-file import module in Cytoscape to support Unicode… and Gramps exports a few reports to Graphviz, so there were some possibilities to combine those… but it is a hassle to use a dozen different programs… so I try to limit it…
I used/use Freeplane for some things, but for someone who can’t do much scripting, there are limited possibilities to get an export from Freeplane that is really interchangeable with anything but other mind-map tools.

I’m not getting through it all at the moment, but it sounds very interesting. Is there another place to continue this off-topic discussion without blowing up this bug report?

Just very briefly: it looks like Gramps uses a relational database, so that should allow for very flexible analysis. Have you tried SQL and a powerful third-party client? I’d probably lose track with so many different tools. On the other hand, if I had a project as large as yours, I’d most likely set up a custom database on a fast server.

The Timeline Project, although it has significantly fewer features than Aeon, has the advantage of a very simple XML file format, which you can also generate yourself. I have already done something like that for yWriter, where synchronization is possible in both directions. For Gramps, with a little luck, it might even work with XSLT; with that you could create a timeline view from Gramps pretty fast. Since Gramps has a Python API for plug-ins, a script-based export to the TL format would also be conceivable.
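
To give an idea of how simple that could be, a rough sketch of writing such a file with Python’s ElementTree. The element names follow my reading of the .timeline format and should be checked against a file saved by the program itself:

```python
# Rough sketch of writing a Timeline Project (.timeline) file with ElementTree.
# Element names are an approximation of the format, not a specification.
import xml.etree.ElementTree as ET

events = [  # hypothetical data pulled from Gramps
    ("Birth of N.N.", "1881-05-04 00:00:00", "1881-05-04 00:00:00"),
    ("Voyage Kristiania-Providence", "1915-02-20 00:00:00", "1915-04-05 00:00:00"),
]

root = ET.Element("timeline")
ET.SubElement(root, "version").text = "2.4.0"
ET.SubElement(root, "timetype").text = "gregoriantime"
ET.SubElement(root, "categories")
container = ET.SubElement(root, "events")
for text, start, end in events:
    event = ET.SubElement(container, "event")
    ET.SubElement(event, "start").text = start
    ET.SubElement(event, "end").text = end
    ET.SubElement(event, "text").text = text

ET.ElementTree(root).write("gramps_export.timeline",
                           encoding="utf-8", xml_declaration=True)
```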

Yes, it uses a database backend, but it uses Python pickles (or whatever it is called) to store serialized data in BLOBs (it was built for Berkeley DB, but they have started using SQLite). They are talking about changing the serialized data to JSON strings, and they also have an experimental MongoDB backend that doesn’t use BLOBs or “pickles”…

Using SQL to query the data directly is not possible because of that, but there is an SQLite export… some people use that…

You are right; the Gramps file format even seems to be version-dependent. For the update to version 5, it looks like the database has to be migrated manually via an exchange format.

Your project appears to me to be extremely challenging. Is an application like Gramps, which is actually intended for a different use case and moreover uses a proprietary file format, the right one? Basically you just need a rather simple data structure, combined with longevity, high performance and flexible evaluation methods, don’t you?
If it’s not a one-man project, a client-server architecture would be advantageous, so that the data can be kept somewhere central and generally accessible, independent of the individual evaluation.
Just some thoughts from a systems engineer …

Apart from that, I’ve made a little progress with unraveling .aeon files.

First: you don’t need to update the Berkeley database in version 5, and it does upgrade it from earlier versions. The version dependencies are based on the Python version used…
It is also easy to convert to SQLite.
The move via export/exchange formats is just recommended for cleanup, but for really old databases you will need to go via the XML…
It is actually a very clean way of doing it, instead of risking corruption and crashes…
Upgrading from 4 to 5 was not a big deal for me, not even on Windows, and converting from Berkeley DB to SQLite, PostgreSQL or MongoDB was not a big deal either… but of course there are always some people who have trouble…

Yes, you are right, it would be better with a database system, but most server solutions do not have any kind of good visualization… Therefore I am thinking about using Excel as the input and exporting to Markdown, because manually adding rows of data in Excel is really simple, and I have already found a script that can convert a CSV to Markdown notes, one note per row, with options to choose columns as YAML frontmatter keys etc. There are also Excel addons that create Graphviz graphs, and even the NodeXL add-in for network graphs if needed…
And if that doesn’t work, even I can create a simple VBA macro that exports to Markdown formatted the way I need it.
Another benefit of Excel is that it can import nearly any XML with minimal programming (but some work), and it can actually export to the same XML format as well (with a little more work).
I can also use Excel as a front end for a full-featured database if needed… so it might be possible to combine even more uses as time goes on…
The last thing about Excel is that I can actually get things done in Excel…
I am also using Openrefine.
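
Roughly the kind of conversion I mean, as a sketch (not the script I found; the column names are only examples of what my sheet might look like):

```python
# Rough sketch: turn each row of a CSV export from Excel into a Markdown note
# with selected columns as YAML frontmatter. Column names are just examples.
import csv
from pathlib import Path

CSV_FILE = "ships.csv"                        # hypothetical export from Excel
OUT_DIR = Path("notes")
FRONTMATTER_COLS = ["ship", "date", "port"]   # columns to put in the YAML block

OUT_DIR.mkdir(exist_ok=True)
with open(CSV_FILE, newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        lines = ["---"]
        lines += [f"{col}: {row.get(col, '')}" for col in FRONTMATTER_COLS]
        lines += ["---", "", row.get("notes", "")]
        note = OUT_DIR / f"{row['ship']}_{row['date']}.md"
        note.write_text("\n".join(lines), encoding="utf-8")
        print(f"Wrote {note}")
```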

I actually find CSV more and more useful, as long as it is not a CSV from Freeplane… haha

I am thinking about trying Arches, and I have also looked at Openmappr, Omeka, Heurist, etc.

I don’t use Gramps for this project at the moment, but if the features I mentioned are added, it will be good software for this type of project too, because of its hierarchical place registration and the way you can relate people to other people even though they are not family… and it is possible to add Events without first adding a person for that event…
But as I mentioned, there are still some limitations…

Another thing is that Gramps has experimental database backends for both MongoDB and PostgreSQL; the PostgreSQL one is more or less adopted now, and when they move away from the BLOBs, it will be easy to use from third-party systems…

There is no doubt that Gramps is the most advanced genealogy software, but the learning curve is steep…
Multi-user support will most likely come in a future version, but they need time to replace some older libraries… and maybe to understand what potential the software actually has…
Personally I think that if they made the changes I mentioned, they would get more researchers (not only hobby genealogists) using the software, and among researchers in e.g. the humanities and historical social research there are a lot of people with Python experience, so maybe more research-minded programmers would help with the development of the software and new features…


Regarding timelines in genealogy software: they are used a lot, but most programs only have some kind of table format, listing the events from start to end…
Progeny Genealogy has Genlines, but it is based on family relations and only reads GEDCOM or databases from the “top 3-4” genealogy programs; there is no easy way to add events… and no way to add events not related to a person…


Sidetrack: Have you ever tested Causality,
or Constellation,
or Running Reality (maybe not that much for fiction)?

Just a few of the thousands of free and open-source programs I have found along the way, trying to find “Utopia” in the “research software world”…

Oooh, and I do know that I could get everything I want with R and Shiny :slight_smile: :slight_smile: or Plotly and Dash, but I have just not managed to wrap my head around them…


PS.

@matt has already answered this post, so I am not afraid of any off-topic discussion as long as it is about workflows for visualizing research…
But if he wants to move this part of the thread to a more suited group, he is most welcome to do so :slight_smile:

Well, sure, it’s your show. Besides, it’s certainly fun to dig up all the software treasures.

As for database visualization, my idea would be this: for example, export a small subset as a CSV-formatted report that can be imported into Aeon Timeline. The point is that the Aeon project is then a disposable product for pure visualization, so to speak, so its data export capabilities don’t matter at all (if I recall correctly, the complaint about Aeon’s limited export capabilities was somewhere at the beginning of our discussion). The same applies to the other visualization software.
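
Something along these lines, as a rough sketch (the column headers here are arbitrary; as far as I remember, Aeon lets you map columns to fields during CSV import anyway):

```python
# Sketch: write a small subset of records as a CSV report meant only for
# importing into a timeline tool for visualization. Headers are arbitrary
# and would be mapped to fields during import.
import csv

subset = [  # hypothetical records selected from the main database
    {"Title": "SS Example leaves Kristiania", "Start": "1915-02-20", "End": "1915-02-20"},
    {"Title": "SS Example arrives Providence", "Start": "1915-04-05", "End": "1915-04-05"},
]

with open("timeline_subset.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=["Title", "Start", "End"])
    writer.writeheader()
    writer.writerows(subset)
```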

I looked at Causality a long time ago; I liked it a lot, but it is heavily specialized in screenwriting. Its basic idea of visualizing setup and payoff may now also be achieved with Aeon Timeline 3.

In the meantime I have completely changed my Aeon 2 to yWriter converter from CSV to the .aeonzip format. That means yWriter projects can now be created without the detour via Aeon’s CSV export. Before the release, however, I want to do a little more testing.
The processing of the TL3 .aeon format is done in the same way, except that there is an additional layer, and accordingly more UIDs have to be collected. It is only a question of time …

Anyway, November is approaching, and with it the end of toying around and the beginning of real writing.

It will absolutely be possible to create an addon (Gramplet) for Gramps that can use the XML from The Timeline Project. It should also be possible to make a Gramplet that can read/write the AT3 format, like you do for your yWriter project.
But I do not have the knowledge and patience to sit down and learn Python. Even though I know the theory around programming and the logic, I just can’t manage to focus my thoughts on it… even though I can sit for hours searching for things in newspapers… it’s strange how our brains work…
(Maybe I should write a novel about it; I am sure it could turn into a great, really dark science-action story in the end… hahaha. But actually, that is one of the reasons why I want to have my research in the same timeline as my research objects, all linked together: to record the parallel stories, i.e. when I worked on which object, how much time I spent, what I did, etc. So I am actually creating an AT3 template for that… if I get it working as intended, I will share it…)

I will look at your code and see if I can learn something when you publish it… and maybe I can ask for some help/advice if I see that I might be able to figure something out…

I also used to use Clooz, and I am testing the Clooz 4 beta. Clooz is document-focused research and registration software for genealogy, but I think the developer wants to do “more” with it as soon as he has finished version 4… Again the problem of using multiple programs appears: really good and helpful features, but in different software…

This is the code that iterates through Aeon 2’s JSON data and builds a yWriter novel structure from it.
In the meantime, it seems feasible to me to create an .aeonzip file from a yWriter project the other way around. But now NaNoWriMo has priority.

Funny, with me it’s the other way around: Even when I’m feeling so bad and all creativity fails, I can still program all day. That’s what’s kept me going for a living.

Go for it! Here’s the place to be in November.

Cheers,
Peter