Added an album to dev

Hi Aurélien! Hi community!

After digging into the DT software, I found this fork, and I really want to switch to it. I'm an Apple user so I cannot properly use the software at the moment.

If I understood correctly, you are going to include vkdt when it is ready. For now I'm trying to add basic support for running it under macOS on the M1 (the machine that I have), but I would like to help here too.

I'm also looking into cross-platform solutions for the GUI, and Dear ImGui, which vkdt uses, seems like a good option.

If it is appreciated, and you think it could be a good path for the project, I would like to try implementing it for DT as well. I'm not an experienced developer, but in my spare time I like doing this kind of thing, especially when it is related to my photography hobby.

Let me know what you think about it and whether there are particular things I should take into consideration during the preliminary design.

Best regards,

Luca

Added a post to dev

I accidentally discovered that the Linux build script used a "package" build, meaning the CPU optimizations are limited to generic ones in order to produce portable binaries that can be installed on any x86-64 platform. By "used", I mean the package build was not explicitly disabled, so it was enabled by default.

Anyway, this is now disabled by default, since the actual packages (.exe and .appimage) are not built through that script, which is primarily meant to help end-users. To get the previous behaviour back, you would need to run:

$ sh build.sh --build-package --install --sudo

Not using the package build option may increase performance on CPU by 20 to 30 % depending on your hardware, thanks to platform-specific optimizations.

I have also introduced a new argument that will launch the Git update commands that users seem to forget all the time. There is a caveat, though: updating the source code by calling Git from within the script doesn't update the script for the current run, so this method doesn't work when the script itself has been modified. The argument to update the source code and the submodules (Rawspeed, Libraw) is:

$ sh build.sh --update --install --sudo

I have also modified the internals of that script in order to automatically:

  • update the Lensfun database of lenses,
  • add a global system shortcut (.desktop file) so the software will be globally available from the app menus,
  • add a global system command so that ansel is globally available from the terminal.

The goal of all these changes is obviously to make it more user-friendly to run a self-built version of the software, which improves performance, especially for computers without a GPU. The one-stop command would be:

$ sh build.sh --update --install --sudo --clean-all

But of course, you will need to run the Git update manually one last time beforehand, to update the script itself:

$ git pull --recurse-submodules

Alternatively, you can download the build script directly and replace the old build.sh at the root of the source code directory.



Added a post to dev

2022 was so bad in terms of junk emails and noise that I started the Virtual Secretary, a Python framework to write intelligent email filters that cross-reference information between several sources to guess what incoming emails are and whether they are important/urgent or not. When I talk about junk emails, I also mean Github notifications, pings on pixls.us (thank God I closed my account on that stupid forum), YouTube, and direct emails from people hoping to get some help in private.

Having become "the face" of darktable, mostly because I'm one of the few who bother providing user education and training instead of just pissing code, I didn't see that coming, and I wasn't prepared. A lot of people now mistake me for the front desk, which doesn't help abstract thinking on coding matters, let alone taking time to actually produce art. The problem is that all the time lost dealing with info/noise/input is not spent solving problems, and time is the only thing for which you cannot get a refund.

After a while, I figured it would be nice to extend the Virtual Secretary with a machine-learning classifier, which would guess in what folder incoming emails should go by learning from the content extracted from the emails already in said folder. It's actually much easier to implement than I thought, but the time-consuming bit is writing text filters to clean up the input (because garbage in, garbage out, especially for spam emails, which are generally improperly formatted).
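
Roughly, the idea boils down to something like this (a simplified sketch, not the actual Virtual Secretary code: the folder names, the clean-up rules and the use of scikit-learn's naive Bayes here are all my own placeholders):

import re
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

def clean_up(text):
    # Strip the usual garbage before vectorizing: URLs, leftover HTML, numbers.
    text = re.sub(r"https?://\S+", " ", text)
    text = re.sub(r"<[^>]+>", " ", text)
    text = re.sub(r"\d+", " ", text)
    return re.sub(r"\s+", " ", text).lower().strip()

# Bodies of emails already filed in folders act as the training set.
bodies = [
    "your pull request was merged into master",
    "new issue opened on the bug tracker",
    "invoice attached, payment due next week",
    "your subscription renewal receipt",
]
folders = ["github", "github", "billing", "billing"]

model = make_pipeline(TfidfVectorizer(preprocessor=clean_up), MultinomialNB())
model.fit(bodies, folders)

# Guess the folder of an unseen message.
print(model.predict(["new pull request opened on the bug tracker"]))  # -> ['github']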

But the ultimate goal, in my wildest dreams, was to build an autoresponder for people asking questions already answered on one of the many websites I have contributed to over the years. It's a constant frustration to see that all the pages of doc I have written over the years are lost in Internet limbo. On FLOSS-centric forums, benevolent guys also tend to experience the same kind of fatigue: repeating the same info again and again, linking the same pages, to never-ending hordes of newbies who don't know what to look for. Just look at the darktable Reddit: every 14 days, someone else asks why the lighttable thumbnails don't look like the darkroom preview. Even discarding the amount of frustration and anger here, the number of man-hours lost in repetition is staggering. Just because information is lost.

The true problem of search engines is you need to know what keywords to look for. Which is circling back to the fact that newbies don't know the slang. So they don't know what to look for. They don't have any entry point in the matrix. Except other humans. Which sucks for the ones having to do the work, usually for free.

After merging a neural layer of word2vec word embeddings (big words to say it's unsupervised machine learning that finds how words are contextually related in sentences, that is, syntactic structures, synonyms and the like) as a first step in my email classifier (which is now up to 92 % accuracy), I wondered whether this couldn't be used to build a context-aware and synonym-aware search engine, able to look past exact keywords.
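
The embedding step itself is only a few lines with an off-the-shelf library (a toy illustration assuming Gensim, not the actual Virtual Secretary code):

from gensim.models import Word2Vec

# Tokenized sentences from the email corpus (toy data).
sentences = [
    ["thumbnail", "looks", "different", "in", "lighttable"],
    ["preview", "looks", "different", "in", "darkroom"],
    ["export", "jpeg", "colors", "look", "shifted"],
]
model = Word2Vec(sentences, vector_size=50, window=3, min_count=1, epochs=100)

# Words used in similar contexts end up close to each other in the embedding
# space, which is how "thumbnail" and "preview" can be treated as related
# even when they never co-occur in the same sentence.
print(model.wv.most_similar("thumbnail", topn=3))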

Turns out a couple of guys from Bing had the same idea in 2016 and published their maths, so I implemented them, then proceeded to add a web interface on top. That gave birth to Chantal, the AI you are kindly asked to bother before bothering me. The current version is trained against 101,000 internet pages from my own websites and the darktable & Ansel docs, along with some reliable color-science resources. It indexes 15,500 pages in French and English and can process search queries in either or both of these languages. One of its main features is to suggest a list of keywords associated with your query, so you can refine/reorient/try things you wouldn't have thought of before.
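
The gist of the ranking maths, stripped of all refinements, is something like this (a toy numpy sketch with made-up 3-d word vectors, not the production code): score a document by how close the query's word vectors sit to the centroid of the document's word vectors.

import numpy as np

# Made-up word vectors, just to make the example runnable.
embeddings = {
    "thumbnail":  np.array([0.9, 0.1, 0.0]),
    "preview":    np.array([0.8, 0.2, 0.1]),
    "lighttable": np.array([0.7, 0.3, 0.0]),
    "exposure":   np.array([0.1, 0.9, 0.2]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def score(query_words, doc_words):
    # Compare each query word to the centroid of the document's word vectors.
    doc_centroid = np.mean([embeddings[w] for w in doc_words if w in embeddings], axis=0)
    sims = [cosine(embeddings[w], doc_centroid) for w in query_words if w in embeddings]
    return float(np.mean(sims))

# The document never contains the word "preview", yet it still ranks high
# because its words sit close to "preview" in the embedding space.
print(score(["preview"],  ["thumbnail", "lighttable"]))   # high
print(score(["exposure"], ["thumbnail", "lighttable"]))   # lower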

Hope that helps.

That work showed me how poorly indexable many websites are. To account for the lack of an XML sitemap on forums.darktable.fr and color.org, I had to write a recursive crawler. But even then, many pages don't have description meta tags or a proper date tag. That means you need to use regular expressions and indirect methods to try to identify the metadata, and manually tune the HTML parser to extract the actual content part of the webpage (discarding sidebars, menus, asides and advertising, if any).
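
The skeleton of such a crawler is nothing fancy (a simplified sketch, not the actual code; the fallback rules below are placeholders): follow same-domain links recursively, read the meta description when it exists, and fall back to regexes when it doesn't.

import re
from urllib.parse import urljoin, urlparse

import requests
from bs4 import BeautifulSoup

DATE_RE = re.compile(r"\b\d{4}-\d{2}-\d{2}\b")  # crude ISO-date fallback

def crawl(url, domain, seen=None, max_pages=50):
    seen = set() if seen is None else seen
    if url in seen or len(seen) >= max_pages:
        return
    seen.add(url)

    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")

    # Prefer real metadata, fall back to indirect methods.
    meta = soup.find("meta", attrs={"name": "description"})
    if meta and meta.get("content"):
        description = meta["content"]
    elif soup.title and soup.title.string:
        description = soup.title.string
    else:
        description = ""
    date_match = DATE_RE.search(html)
    date = date_match.group(0) if date_match else "unknown"

    print(url, "|", description[:60], "|", date)

    # Recurse into links that stay on the same domain.
    for link in soup.find_all("a", href=True):
        target = urljoin(url, link["href"])
        if urlparse(target).netloc == domain:
            crawl(target, domain, seen, max_pages)

crawl("https://forums.darktable.fr/", "forums.darktable.fr")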

Then, you get to love Q&A forums like Stack Overflow, where proper questions start a thread, proper answers follow, and the best answers are selected by the community. "Thank you" and "me too" messages are explicitly forbidden in the conditions of use. On forums like pixls.us or forums.darktable.fr, proper technical information gets lost in the middle of semi-technical rambling, life stories and bros bonding over tales of software, in a continuous thread where nothing distinguishes relevant from irrelevant, accurate from inaccurate, or sound explanations from gross misunderstandings of color theory. From a machine-crawling perspective, there is very little to exploit here, and investing time on such platforms is a dead loss.

Added a post to dev

It's been roughly 3 months since I rebranded "R&Darktable" (which nobody seemed to get right) into "Ansel", then bought the domain name and created the website from scratch with Hugo (I had never programmed in Golang before, but it's mostly template code).

Then I spent a total of 70 h making the nightly package builds for Windows and Linux work for continuous delivery, something that Darktable never got right ("you can build it yourself, it's not difficult"), only to see the bug tracker blow up after the release (nothing better than chaining the pre-release sprint with a post-release one to reduce your life expectancy).

People keep asking for a Mac build because they have no notion of the amount of work it requires, while the Brew package manager breaks lib dependencies on a weekly basis when you are not lucky. macOS simply requires an unreasonable amount of care, which becomes a dead loss when you know that not even 9 % of Darktable users run it. Also, for the last time, Github (actually, the Microsoft Azure instances providing Github Actions runners) has no ARM systems, so a nightly Mac build would necessarily target the AMD64 architecture, that is, old MacBooks from before Apple decided once again to go full Apple on its own island. Don't expect 90 % of the free world to scurry over to a tech nobody needed and barely anybody uses.

Since then, I have optimized the local laplacian in highlights reconstruction with a stupid trick: processing a downsized image instead of the full-resolution one. I had this idea in the back of my mind for a long time but feared the detrimental side effects. But since clipped areas are signal-less anyway, processing a slightly blurrier version is almost invisible. Also, the shoulder of your typical S/filmic tone curve will compress everything close to white anyway, so it reduces perceived sharpness by reducing contrast in the highlights no matter what. We are talking about a 96 % speed-up on CPU (mostly because we can process the image at once, with no tiling).
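
The principle, reduced to its simplest expression (a numpy sketch for illustration, not the actual guided-laplacian code, which lives in C): run the expensive reconstruction on a downscaled copy, upscale the result, and only paste it back where the pixels were clipped anyway, so the loss of sharpness stays confined to signal-less areas.

import numpy as np
from scipy.ndimage import zoom

def expensive_reconstruction(img):
    # Stand-in for the real multi-scale filter: anything costly goes here.
    return np.clip(img, 0.0, 1.0)

def reconstruct_highlights(img, clip_threshold=1.0, scale=0.25):
    small = zoom(img, scale, order=1)                        # work at 1/4 resolution
    recovered = zoom(expensive_reconstruction(small),
                     1.0 / scale, order=1)[:img.shape[0], :img.shape[1]]

    out = img.copy()
    mask = img >= clip_threshold                             # clipped pixels only
    out[mask] = recovered[mask]                              # blur stays confined there
    return out

img = np.random.rand(256, 256) * 1.2                         # fake data with some clipping
print(reconstruct_highlights(img).max())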

Using that, I developed an experimental noise and chromatic-aberration pre-filter reusing the multi-scale guided laplacians. It's not bad, but again quite slow.

Since February, most of the work has been spent on cleaning up the GUI by moving collections of buttons, either the full-text ones or the weird icon ones, to the global menu and rewiring the keyboard shortcuts to it. That makes features more discoverable while using less screen real estate.



Added a post to dev

The Intel OpenCL driver (Neo) has been unblacklisted on Windows. I have tested it a couple of times in 2022 and it seems to work, so I see no reason to keep the ban. The package will be available tomorrow.
