Friday 29 December 2023

The DUAL table

I'm working with Oracle databases (version 19 for now), and I've always wondered about the weird DUAL table [1] that I keep needing in my work.

Things like this:

select sysdate from dual;
select seq_s_ordertable.nextval from dual;
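From Java this typically ends up in plain JDBC calls, something like the sketch below (the DataSource and the surrounding method are assumptions for illustration, using the same sequence as above):

  // A minimal JDBC sketch; uses java.sql.* and javax.sql.DataSource.
  long nextOrderId(DataSource dataSource) throws SQLException {
    try (Connection con = dataSource.getConnection();
         PreparedStatement ps = con.prepareStatement("select seq_s_ordertable.nextval from dual");
         ResultSet rs = ps.executeQuery()) {
      if (!rs.next()) {
        throw new SQLException("No row returned from DUAL");
      }
      return rs.getLong(1);
    }
  }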

Well, apparently we have Charles "Chuck" Weiss [2] to thank for it.

One of the reasons for its existence is that the FROM clause is mandatory in Oracle SQL syntax.

Interestingly, Charles originally created the DUAL table with two rows, hence the name DUAL. Nowadays it has only one row, but the name stuck.

It's part of the Data Dictionary of the SYS user. Changing the DUAL table will cause problems! [3]

References

[1] Wikipedia - DUAL table
https://en.wikipedia.org/wiki/DUAL_table
[2] Wikipedia - Charles Weiss
https://en.wikipedia.org/wiki/Charles_Weiss
[3] Ask tom - All about the DUAL table
https://asktom.oracle.com/ords/f?p=100:11:0::::P11_QUESTION_ID:1562813956388

Thursday 21 December 2023

Using Java to Zip/Unzip files

There has been a ZipFile class in Java for a long time. But since JDK 7 there's also a ZipFileSystem (a zip file opened as a FileSystem), which is a bit easier to work with in some cases and can do more things.

Below are two examples, one using ZipFile and one using ZipFileSystem.
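Both tests use a result and a log field and a small FileContents holder that aren't shown below; they look roughly like this (my sketch, the actual declarations may differ):

  // Assumed test fixtures (not shown in the original tests):
  private final List<FileContents> result = new ArrayList<>();
  private final List<String> log = new ArrayList<>();

  static class FileContents {
    final String filename;
    final byte[] contents;

    FileContents(String filename, byte[] contents) {
      this.filename = filename;
      this.contents = contents;
    }
  }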

Using ZipFile

Unzipping works as follows:

  @Test
  public void testOldZip() throws URISyntaxException, IOException {
    URL resource = UnzipperTest.class.getResource("/test.zip");
    assert resource != null;
    Path path = Path.of(resource.toURI());
    try (var zipFile = new ZipFile(path.toFile())) {
      Enumeration<? extends ZipEntry> entries = zipFile.entries();
      while (entries.hasMoreElements()) {
        ZipEntry nextElement = entries.nextElement();
        if (nextElement.isDirectory()) {
          log.add("File " + nextElement.getName() + " from zipfile is a directory - skipped");
        }
        else if (nextElement.getName().contains("__MACOSX")) {
          log.add("File " + nextElement.getName() + " from zipfile is a MACOS file/dir - skipped");
        }
        else {
          log.add("File " + nextElement.getName() + " retrieved from zip.");
          var filename = nextElement.getName();
          InputStream inputStream = zipFile.getInputStream(nextElement);
          result.add(new FileContents(filename, inputStream.readAllBytes()));
        }
      }
    }
    assertThat(result).hasSize(6);
    assertThat(result.stream().map(x -> x.filename).collect(Collectors.toList())).isEqualTo(
        List.of("test/test1.txt",
            "test/test/ziptest.zip",
            "test/test/ziptest/zippem2.txt",
            "test/test/ziptest/zippem.txt",
            "test/test2/test2.txt",
            "test/test2/text3.xml"));
    assertThat(log).hasSize(11)
        .isEqualTo(List.of("File test/ from zipfile is a directory - skipped",
            "File test/test1.txt retrieved from zip.",
            "File test/test/ from zipfile is a directory - skipped",
            "File test/test/ziptest.zip retrieved from zip.",
            "File test/test/ziptest/ from zipfile is a directory - skipped",
            "File test/test/ziptest/zippem2.txt retrieved from zip.",
            "File test/test/ziptest/zippem.txt retrieved from zip.",
            "File test/test2/ from zipfile is a directory - skipped",
            "File test/test2/test2.txt retrieved from zip.",
            "File test/test2/test3/ from zipfile is a directory - skipped",
            "File test/test2/text3.xml retrieved from zip."));
  }

Using ZipFileSystem

Apparently it is also possible to mount a zip file as a FileSystem (since JDK 7).

It's a bit easier to work with, has fewer problems with leaking resources/streams, and makes it easy to edit items in and remove items from the zip.
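For example, editing a zip in place could look like this minimal sketch (archive.zip and the entry names are made up; this is not part of the tests below):

  void editZip() throws IOException {
    // Open an existing zip as a writable FileSystem; changes are written back when it is closed.
    try (FileSystem zipFs = FileSystems.newFileSystem(Path.of("archive.zip"), Map.of())) {
      Path newEntry = zipFs.getPath("/notes/readme.txt");
      Files.createDirectories(newEntry.getParent());
      Files.writeString(newEntry, "added via the zip file system");
      Files.deleteIfExists(zipFs.getPath("/old.txt"));
    }
  }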

On the other hand, some FileSystem operations are not available in a ZipFileSystem, and then you get an UnsupportedOperationException.

  private static class ZipVisitor
      extends SimpleFileVisitor<Path> {

    private final List<FileContents> result;
    private final List<String> log;

    private ZipVisitor(List<FileContents> result, List<String> log) {
      this.result = result;
      this.log = log;
    }

    @Override
    public FileVisitResult visitFile(Path file, BasicFileAttributes attrs) throws IOException {
      if (file.toString().contains("__MACOSX")) {
        log.add("File " + file + " from zipfile is a MACOS file/dir - skipped");
        return FileVisitResult.CONTINUE;
      }
      // compare on the file name only; file.toString() is the full path inside the zip
      if (Objects.equals(String.valueOf(file.getFileName()), ".DS_Store")) {
        log.add("File " + file + " from zipfile is a .DS_Store file/dir - skipped");
        return FileVisitResult.CONTINUE;
      }
      log.add("File " + file + " retrieved from zip.");
      String filename = file.toString();
      // file.toFile() -> unsupported operation.
      result.add(new FileContents(filename, Files.readAllBytes(file)));
      return FileVisitResult.CONTINUE;
    }
  }

  @Test
  public void testNewZip() throws URISyntaxException, IOException {
    URL resource = UnzipperTest.class.getResource("/test.zip");
    assert resource != null;
    var path = Path.of(resource.toURI());
    try (FileSystem filesystem = FileSystems.newFileSystem(path, Collections.emptyMap())) {
      var rootDirectories = filesystem.getRootDirectories();
      rootDirectories.forEach(root ->
              //walk the zip file tree
          {
            try {
              Files.walkFileTree(root, new ZipVisitor(result, log));
            }
            catch (IOException e) {
              throw new RuntimeException(e);
            }
          }
      );
    }
    assertThat(result).hasSize(6);
    assertThat(result.stream().map(x -> x.filename).collect(Collectors.toList())).isEqualTo(
        List.of("/test/test2/text3.xml",
            "/test/test2/test2.txt",
            "/test/test/ziptest/zippem.txt",
            "/test/test/ziptest/zippem2.txt",
            "/test/test/ziptest.zip",
            "/test/test1.txt"));
    assertThat(log).hasSize(6)
        .isEqualTo(List.of("File /test/test2/text3.xml retrieved from zip.",
            "File /test/test2/test2.txt retrieved from zip.",
            "File /test/test/ziptest/zippem.txt retrieved from zip.",
            "File /test/test/ziptest/zippem2.txt retrieved from zip.",
            "File /test/test/ziptest.zip retrieved from zip.",
            "File /test/test1.txt retrieved from zip."));
  }

References

fahd.blog - Java 7: Working with Zip Files
https://fahdshariff.blogspot.com/2011/08/java-7-working-with-zip-files.html
Oracle Java Documentation - Zip File System Provider
https://docs.oracle.com/javase/8/docs/technotes/guides/io/fsp/zipfilesystemprovider.html

Thursday 14 December 2023

Streams and Filters

A simple little thing.

I like to use streams and filters, and I was wondering what's the best way to go about some things.

For example: I wish to search for a person in a list of Persons.
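The tests below use a Person record and a Persons fixture along these lines (a sketch; the real ones aren't shown in this post):

  public record Person(Long id, String name, String surname, String country) {
  }

  public static class Persons {
    // The actual contents of the list are an assumption.
    public static List<Person> get() {
      return List.of(
          new Person(1L, "Mr.", "Bear", "Netherlands"),
          new Person(2L, "Ada", "Lovelace", "England"));
    }
  }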

  @Test
  public void testSimple() {
    Person personToFind = new Person(null, "Mr.", "Bear", "Netherlands");
    Person otherPersonToFind = new Person(null, "Linda", "Lovelace", "England");
    assertThat(Persons.get().stream()
        .filter(person -> Objects.equals(personToFind.name(), person.name()) &&
            Objects.equals(personToFind.surname(), person.surname()) &&
            Objects.equals(personToFind.country(), person.country()))
        .findFirst()).isPresent();

    assertThat(Persons.get().stream()
        .filter(person -> Objects.equals(otherPersonToFind.name(), person.name()) &&
            Objects.equals(otherPersonToFind.surname(), person.surname()) &&
            Objects.equals(otherPersonToFind.country(), person.country()))
        .findFirst()).isEmpty();
  }

This works, but the single filter with all the && in it seems a bit unreadable.

Of course, I could replace the x && y && z with three filters, as it boils down to the same thing.

  @Test
  public void testSimple2() {
    Person personToFind = new Person(null, "Mr.", "Bear", "Netherlands");
    Person otherPersonToFind = new Person(null, "Linda", "Lovelace", "England");
    assertThat(Persons.get().stream()
        .filter(person -> Objects.equals(personToFind.name(), person.name()))
        .filter(person -> Objects.equals(personToFind.surname(), person.surname()))
        .filter(person -> Objects.equals(personToFind.country(), person.country()))
        .findFirst()).isPresent();

    assertThat(Persons.get().stream()
        .filter(person -> Objects.equals(otherPersonToFind.name(), person.name()))
        .filter(person -> Objects.equals(otherPersonToFind.surname(), person.surname()))
        .filter(person -> Objects.equals(otherPersonToFind.country(), person.country()))
        .findFirst()).isEmpty();
  }

But my colleague prefers to extract lambdas into dedicated methods as soon as they get even a little complex. It just reads more easily.

private boolean compare(Person person, Person otherPerson) {
    return Objects.equals(otherPerson.name(), person.name()) &&
        Objects.equals(otherPerson.surname(), person.surname()) &&
        Objects.equals(otherPerson.country(), person.country());
  }

  @Test
  public void testSimple3() {
    Person personToFind = new Person(null, "Mr.", "Bear", "Netherlands");
    Person otherPersonToFind = new Person(null, "Linda", "Lovelace", "England");
    assertThat(Persons.get().stream()
        .filter(person -> compare(person, personToFind))
        .findFirst()).isPresent();

    assertThat(Persons.get().stream()
        .filter(person -> compare(person, otherPersonToFind))
        .findFirst()).isEmpty();
  }

My personal preference is the last one.
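A small variation on the same idea (my sketch, not from the original tests) is a factory method that returns a java.util.function.Predicate, so the call site reads almost like prose:

  private static Predicate<Person> matches(Person wanted) {
    return person -> Objects.equals(wanted.name(), person.name())
        && Objects.equals(wanted.surname(), person.surname())
        && Objects.equals(wanted.country(), person.country());
  }

  // usage: Persons.get().stream().filter(matches(personToFind)).findFirst()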

Thursday 7 December 2023

Is Stream.findFirst() Short-circuited?

So, the assumption is: if I use findFirst on a stream, none of the items in the stream after the first match are evaluated.

I assumed that it was, and it is, but it's always nice to see this verified in a simple test.

  private List<String> list = new ArrayList<>();

  private boolean add(String message, boolean returnValue) {
    list.add(message);
    return returnValue;
  }

  public boolean check() {
    List<Supplier<Boolean>> checks = new ArrayList<>();
    checks.add(() -> {
      list.add("First expression");
      return true;
    });
    checks.add(() -> {
      list.add("Second expression");
      return true;
    });
    checks.add(() -> {
      list.add("Third expression");
      return false;
    });
    checks.add(() -> {
      list.add("Fourth expression");
      return true;
    });
    checks.add(() -> {
      list.add("Fifth expression");
      return true;
    });
    checks.add(() -> {
      list.add("Sixth expression");
      return false;
    });
    return checks.stream()
        .filter(t -> t.get().equals(Boolean.FALSE))
        .findFirst()
        .isEmpty();
  }

  @Test
  public void testShortCircuit() {
    assertThat(check()).isFalse();
    assertThat(list)
        .containsExactly("First expression", "Second expression", "Third expression");
  }

As this test passes, it seems that way.

In the very beginning it took me some time to wrap my head around this: the operations you define on a stream (.map, .filter, etc.) are not each applied to the entire stream in turn.

Instead, the whole pipeline of operations is applied to the first element of the stream, then to the second element, and so on. From this it follows that a .findFirst() operation terminates processing as soon as it finds a match, and the rest of the stream is never evaluated.
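A tiny test sketch (made-up numbers, not from the original post) that makes this element-by-element flow visible:

  @Test
  public void testElementByElement() {
    List<String> seen = new ArrayList<>();
    Optional<Integer> first = Stream.of(1, 2, 3, 4)
        .peek(i -> seen.add("inspected " + i)) // records every element that flows through the pipeline
        .filter(i -> i % 2 == 0)
        .findFirst();

    assertThat(first).contains(2);
    assertThat(seen).containsExactly("inspected 1", "inspected 2"); // 3 and 4 are never inspected
  }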

Thursday 30 November 2023

Refactoring

So, I saw some code that I didn't like, and I decided to refactor it.

The code was as follows.

public boolean onLeave() {
    boolean valid = super.onLeave();
    Account account = database.getAccount();
    ShoppingList shoppingList = database.getShoppinglist(account);
    if (valid) {
      valid = !shoppingList.isEmpty();
    }
    if (valid) {
      valid = !blockedAccounts.contains(account);
    }
    if (valid) {
      String country = account.getPerson().country();
      valid = webshop.alsoShipsTo(country);
    }
    if (valid) {
      valid = account.hasCreditcardAttached() ||
          account.hasPrepaidcardAttached() ||
          account.hasCash();
    }
    return valid;
  }

And I thought to myself, it's really just a list of expressions, where each one is evaluated until you find one that evaluates to false, and then you stop.

So, that's basically a Stream where you filter out all the "false" values, and get the first one.

So, I decided to refactor it just like that.

public boolean onLeave() {
    Account account = database.getAccount();
    ShoppingList shoppingList = database.getShoppinglist(account);

    List<Supplier<Boolean>> checks = new ArrayList<>();
    checks.add(super::onLeave);
    checks.add(() -> !shoppingList.isEmpty());
    checks.add(() -> !blockedAccounts.contains(account));
    checks.add(
        () -> {
          String country = account.getPerson().country();
          return webshop.alsoShipsTo(country);
        });
    checks.add(() -> account.hasCreditcardAttached() ||
        account.hasPrepaidcardAttached() ||
        account.hasCash());

    return checks.stream()
        .filter(t -> t.get().equals(Boolean.FALSE))
        .findFirst()
        .isEmpty();
  }

I proudly showed this to my colleagues, and they immediately shook their heads in disgust.

Apparently, in my eagerness to use lambdas and Streams and all that cool stuff, what I had really done was recreate the short-circuiting behaviour of an if statement in Streams.

I find myself turning to lambdas and Streams when in reality they are not needed, and the eventual solution works fine without them.

So, rewriting this as a short-circuited IF statement looks like this:

public boolean onLeave() {
    Account account = database.getAccount();
    ShoppingList shoppingList = database.getShoppinglist(account);

    return super.onLeave() &&
        !shoppingList.isEmpty() &&
        !blockedAccounts.contains(account) &&
        webshop.alsoShipsTo(account.getPerson().country()) &&
        (account.hasCreditcardAttached() ||
            account.hasPrepaidcardAttached() ||
            account.hasCash());
  }

Granted, a few more conditions and my IntelliJ will start to complain about the number of expressions in the if statement, and it's possible to clean it up a little by creating separate methods for some of the expressions. But I feel it looks fine.
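For instance, the payment check could be pulled out like this (a sketch reusing the names from the example above):

  private boolean hasPaymentMethod(Account account) {
    return account.hasCreditcardAttached() ||
        account.hasPrepaidcardAttached() ||
        account.hasCash();
  }

  // ... && hasPaymentMethod(account);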

So, in conclusion, just because you know something cool and shiny, it's no reason to try and use it everywhere!

A corollary is that in order to use something properly, you must have some knowledge or experience of when and where it is best used.

Thursday 16 November 2023

Adding a device to a raid with mdadm

Reordering partitions to make room

So my current hard drives are:

# lsscsi
[0:0:0:0]    disk    ATA      Samsung SSD 860  1B6Q  /dev/sda 
[4:0:0:0]    disk    ATA      WDC WD4003FZEX-0 1A01  /dev/sdb 
[5:0:0:0]    disk    ATA      WDC WD5000AAKS-0 3B01  /dev/sdc 
[5:0:1:0]    disk    ATA      ST2000DM001-1CH1 CC29  /dev/sdd 
[10:0:0:0]   disk    WD       Ext HDD 1021     2021  /dev/sde 
[11:0:0:0]   disk    WD       Ext HDD 1021     2021  /dev/sdf 

I had a lot of unused space on /dev/sdb, and thought I could rearrange the partitions and use it as an additional device in my (software) RAID.

I used Parted [1], but GParted [2] is also very nice. GParted can also scan my /dev/md127, which is a RAID device.

Of course there's the general problem nowadays of figuring out what kind of partitioning/boot scheme (MBR, GPT, or whatever) the disk uses.

So:

Disk /dev/sdb: 3.64 TiB, 4000787030016 bytes, 7814037168 sectors
Disk model: WDC WD4003FZEX-0
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: D344E388-FA66-4B30-BA00-4D3B12A07B79
Device          Start        End    Sectors  Size Type
/dev/sdb1        2048       4095       2048    1M BIOS boot
/dev/sdb2        4096    1028095    1024000  500M Linux filesystem
/dev/sdb3     1028096 5273437500 5272409405  2.5T Linux filesystem
/dev/sdb4  5273438208 7420921855 2147483648    1T Linux filesystem

So it's a 4 TB hard drive, and it turns out that I only use /dev/sdb3. (The other partitions are from an old Linux install, before I got a nice SSD and reinstalled Linux/Windows on that.)

So I could first rearrange it: resize /dev/sdb3 to 2 TB and move it to the start of the disk, where it becomes /dev/sdb1.

That worked fine using Parted. It just took some time to actually move all the data.

And then create a new /dev/sdb2 of 2 TB for my RAID with gdisk [3] (the GPT version of fdisk).

I had to look up the appropriate GPT partition type [4] to use for mdadm.

# gdisk /dev/sdb
GPT fdisk (gdisk) version 1.0.9

Partition table scan:
  MBR: protective
  BSD: not present
  APM: not present
  GPT: present

Found valid GPT with protective MBR; using GPT.
Command (? for help): n
Partition number (2-128, default 2): 
First sector (34-7814037134, default = 3907006464) or {+-}size{KMGTP}: 
Last sector (3907006464-7814037134, default = 7814035455) or {+-}size{KMGTP}: 
Current type is 8300 (Linux filesystem)


Hex code or GUID (L to show codes, Enter = 8300): fd00
Changed type of partition to 'Linux RAID'

Command (? for help): p
Disk /dev/sdb: 7814037168 sectors, 3.6 TiB
Model: WDC WD4003FZEX-0
Sector size (logical/physical): 512/4096 bytes
Disk identifier (GUID): D344E388-FA66-4B30-BA00-4D3B12A07B79
Partition table holds up to 128 entries
Main partition table begins at sector 2 and ends at sector 33
First usable sector is 34, last usable sector is 7814037134
Partitions will be aligned on 2048-sector boundaries
Total free space is 3693 sectors (1.8 MiB)

Number  Start (sector)    End (sector)  Size       Code  Name
   1            2048      3907006463   1.8 TiB     8300  
   2      3907006464      7814035455   1.8 TiB     FD00  Linux RAID

Adding the device

Current RAID setup:

# mdadm --assemble /dev/md127 /dev/sdd1 /dev/sde1 /dev/sdf1
mdadm: /dev/md127 has been started with 3 drives.

# mdadm --detail /dev/md127
/dev/md127:
           Version : 1.2
     Creation Time : Wed Mar  6 22:16:05 2013
        Raid Level : raid1
        Array Size : 1953380160 (1862.89 GiB 2000.26 GB)
     Used Dev Size : 1953380160 (1862.89 GiB 2000.26 GB)
      Raid Devices : 3
     Total Devices : 3
       Persistence : Superblock is persistent

       Update Time : Wed Aug 16 09:08:07 2023
             State : clean 
    Active Devices : 3
   Working Devices : 3
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : resync

              Name : micemouse:0
              UUID : ed4531c4:59c132b2:a6bfc3d1:6da3b928
            Events : 7285

    Number   Major   Minor   RaidDevice State
       4       8       65        0      active sync   /dev/sde1
       5       8       81        1      active sync   /dev/sdf1
       3       8       49        2      active sync   /dev/sdd1
# cat /proc/mdstat
Personalities : [raid1] 
md127 : active raid1 sde1[4] sdd1[3] sdf1[5]
      1953380160 blocks super 1.2 [3/3] [UUU]
      
unused devices: <none>

Adding a device:

# mdadm --add /dev/md127 /dev/sdb2
mdadm: added /dev/sdb2
  
md127 : active raid1 sdb2[6](S) sde1[4] sdd1[3] sdf1[5]
      1953380160 blocks super 1.2 [3/3] [UUU]
      
unused devices: <none>
# mdadm --detail /dev/md127
/dev/md127:
           Version : 1.2
     Creation Time : Wed Mar  6 22:16:05 2013
        Raid Level : raid1
        Array Size : 1953380160 (1862.89 GiB 2000.26 GB)
     Used Dev Size : 1953380160 (1862.89 GiB 2000.26 GB)
      Raid Devices : 3
     Total Devices : 4
       Persistence : Superblock is persistent

       Update Time : Sat Aug 19 16:47:58 2023
             State : clean 
    Active Devices : 3
   Working Devices : 4
    Failed Devices : 0
     Spare Devices : 1

Consistency Policy : resync

              Name : micemouse:0
              UUID : ed4531c4:59c132b2:a6bfc3d1:6da3b928
            Events : 7286

    Number   Major   Minor   RaidDevice State
       4       8       65        0      active sync   /dev/sde1
       5       8       81        1      active sync   /dev/sdf1
       3       8       49        2      active sync   /dev/sdd1

       6       8       18        -      spare   /dev/sdb2
[root@localhost ~]# mdadm --grow /dev/md127 --raid-devices=4
raid_disks for /dev/md127 set to 4

# mdadm --detail /dev/md127
/dev/md127:
           Version : 1.2
     Creation Time : Wed Mar  6 22:16:05 2013
        Raid Level : raid1
        Array Size : 1953380160 (1862.89 GiB 2000.26 GB)
     Used Dev Size : 1953380160 (1862.89 GiB 2000.26 GB)
      Raid Devices : 4
     Total Devices : 4
       Persistence : Superblock is persistent

       Update Time : Sat Aug 19 16:49:32 2023
             State : clean, degraded, recovering 
    Active Devices : 3
   Working Devices : 4
    Failed Devices : 0
     Spare Devices : 1

Consistency Policy : resync

    Rebuild Status : 0% complete

              Name : micemouse:0
              UUID : ed4531c4:59c132b2:a6bfc3d1:6da3b928
            Events : 7290

    Number   Major   Minor   RaidDevice State
       4       8       65        0      active sync   /dev/sde1
       5       8       81        1      active sync   /dev/sdf1
       3       8       49        2      active sync   /dev/sdd1
       6       8       18        3      spare rebuilding   /dev/sdb2

And now we wait.

# cat /proc/mdstat
Personalities : [raid1] 
md127 : active raid1 sdb2[6] sde1[4] sdd1[3] sdf1[5]
      1953380160 blocks super 1.2 [4/3] [UUU_]
      [>....................]  recovery =  0.1% (2956032/1953380160) finish=1156.0min speed=28118K/sec
      
unused devices: <none>

Ready in about 12 hours.

References

[1] Parted User’s Manual
https://www.gnu.org/software/parted/manual/parted.html
[2] GParted is a free partition editor for graphically managing your disk partitions.
https://gparted.org/
[3] gdisk(8) - Linux man page
https://linux.die.net/man/8/gdisk/
[4] Raid Wiki Kernel Org - Partition Types
https://raid.wiki.kernel.org/index.php/Partition_Types
WP Guru - How to add a drive to software RAID with mdadm
https://wpguru.co.uk/2021/01/expand-software-raid-mdadm/

Thursday 9 November 2023

The Java Playground

Apparently there's an official Java Playground on https://dev.java/.

They are working on making it possible to embed the Playground in your own webpages.

It was primarily created for developers to play around with the latest Java without having to install it.

But it's early days, and they have more plans for it.

References

dev.java - The Java Playground
https://dev.java/playground/
dev.java
https://dev.java/

Thursday 2 November 2023

How to Change a Git Remote

I needed to change a remote, as the IP address of the remote server had changed.

$ git remote -v
origin ssh://mrbear@192.168.2.1:/home/mrbear/store (fetch)
origin ssh://mrbear@192.168.2.1:/home/mrbear/store (push)

So to change it:

git remote set-url origin ssh://mrbear@192.168.2.7:/home/mrbear/store

And it becomes:

$ git remote -v
origin ssh://mrbear@192.168.2.7:/home/mrbear/store (fetch)
origin ssh://mrbear@192.168.2.7:/home/mrbear/store (push)

References

CareerKarma - How to Change a Git Remote
https://careerkarma.com/blog/git-change-remote/

Thursday 26 October 2023

Trying my first JavaFX Program

So, I've looked up some websites on JavaFX, as I wanted to quickly throw something nice together for drawing geometries.

You can find my little tool on GitHub [1].

It works fine for what I needed. I wanted to be able to quickly show the difference between geometries.

The tool uses LocationTech (JTS) for the geometry stuff, and JavaFX for the display stuff.

It accepts WKT representations of geometries. Currently there is no support for coordinate reference systems, so you're out of luck there.
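Parsing the WKT with JTS boils down to something like this sketch (not necessarily the tool's actual code):

  // Uses org.locationtech.jts (WKTReader, Geometry, ParseException).
  Geometry parse(String wkt) throws ParseException {
    return new WKTReader().read(wkt);
  }

  // Example: parse("POLYGON ((0 0, 10 0, 10 10, 0 10, 0 0))").getArea() evaluates to 100.0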

I did get the error:

Error: JavaFX runtime components are missing, and are required to run this application

But I changed my app to pull JavaFX from Maven Central, and that seems to have fixed the issue. JavaFX is no longer included in the JDK.

References

[1] GitHub.com - geoviewer
https://github.com/maartenl/geoviewer
JavaFX - Getting Started with JavaFX
https://openjfx.io/openjfx-docs/#maven
developer.com - Using Graphics in JavaFX
https://www.developer.com/open-source/using-graphics-in-javafx/
JavaFX - Main website
https://openjfx.io/

Thursday 19 October 2023

Combining two collections using streams

So, I have two collections, and I wish to combine both lists somehow.

The requirements are as follows:

  • I have a collection newPersons
  • I have a collection oldPersons
  • I want a collection containing all the oldPersons, but with (some of) the oldPersons replaced by the matching newPersons (based on id).

public record Person(Long id, String name, String surname) {
}

Using the record class above.

Solution 1.

Create a new list based on oldPersons, replace items in this new list with equivalent items in newPersons.

This solution leaves much to be desired. It's confusing and error-prone.

I was looking for something better.

  public List<Person> mergeMetadata(List<Person> newPersons,
      List<Person> oldPersons) {
    var result = new ArrayList<>(oldPersons);
    newPersons.forEach(newPerson -> {
      result.stream()
          .filter(person -> person.id().equals(newPerson.id()))
          .findFirst()
          .ifPresent(result::remove);
      result.add(newPerson);
    });
    return result;
  }

Solution 2.

Sometimes getting back to our roots using for-loops can help readability.

We could try it the other way around, see if that helps.

This time we create a new list based on newPersons and add an oldPerson if it's not already in the list.

This seems a little more clear.

  public List<Person> mergeMetadata(List<Person> newPersons,
      List<Person> oldPersons) {
    var result = new ArrayList<>(newPersons);
    for (Person oldPerson : oldPersons) {
      if (result.stream().noneMatch(x -> x.id().equals(oldPerson.id()))) {
        result.add(oldPerson);
      }
    }
    return result;
  }

Solution 3.

Merge two collections into one list, by using a map.

  public List<Person> mergeMetadata(List<Person> newPersons,
      List<Person> oldPersons) {
    Map<Long, Person> result = Stream
        .concat(newPersons.stream(), oldPersons.stream())
        .collect(Collectors.toMap(Person::id, Function.identity(), (l, r) -> l));
    return new ArrayList<>(result.values());
  }

Although this solution is the shortest (in code), using Collectors.toMap can be a bit daunting (it was for me) because of the inherent complexity of the method call that creates the map.

Still, perhaps it's just me and my inexperience with combining maps and streams.
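For my own future reference, this is the same collector with the three toMap arguments annotated:

    Map<Long, Person> result = Stream
        .concat(newPersons.stream(), oldPersons.stream())
        .collect(Collectors.toMap(
            Person::id,          // key: the person's id
            Function.identity(), // value: the person itself
            (l, r) -> l));       // on duplicate ids keep the left value; newPersons.stream() comes
                                 // first in the concat, so the newPerson wins over the oldPerson

One difference with the first two solutions: the values() of the resulting map come back in no guaranteed order.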

I don't know if there's an even better way. Will keep an eye out.

Thursday 12 October 2023

Optional Anti-Pattern

It might seem obvious, but recently I saw a colleague use Optional in a weird way.

When I see a piece of code that seems to contain about 3 or more Optional.isPresent() if checks and Optional.get() method calls, I kind of panic.

To be absolutely fair, the code wasn't finished yet, so it was going to change anyway.

So, anyway, before we had Optionals, we had to deal with null a lot.

The way to do this was as follows:

  private boolean documentAccessAllowed() {
    Information information = session.getInformation();
    if (information != null) {
      Document document = database.getDocument(information.getDocumentnumber());
      if (document != null) {
        if (!document.getOwner().equals(user)) {
          Administrator administrator = database.getAdministrator(user.getName());
          if (administrator == null) {
            addErrorMessage("You are not allowed access to this document.");
            return false;
          }
        }
      }
    }
    return true;
  }

I realize this code could be refactored using "Replace Nested Conditional with Guard Clauses", but some purists frown upon multiple return statements in a method. I don't mind. But let's continue with the example as is.
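For reference, the guard-clause version would look roughly like this (a sketch; we'll stick with the nested version for the comparison):

  private boolean documentAccessAllowed() {
    Information information = session.getInformation();
    if (information == null) {
      return true;
    }
    Document document = database.getDocument(information.getDocumentnumber());
    if (document == null) {
      return true;
    }
    if (document.getOwner().equals(user)) {
      return true;
    }
    if (database.getAdministrator(user.getName()) != null) {
      return true;
    }
    addErrorMessage("You are not allowed access to this document.");
    return false;
  }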

Then we got Optionals, yay! But translating the above code verbatim causes my head to explode:

  private boolean documentAccessAllowed() {
    Optional<Information> information = session.getInformation();
    if (information.isPresent()) {
      Optional<Document> document = database.getDocument(information.get().getDocumentnumber());
      if (document.isPresent()) {
        if (!document.get().getOwner().equals(user)) {
          Optional<Administrator> administrator = database.getAdministrator(user.getName());
          if (administrator.isEmpty()) {
            addErrorMessage("You are not allowed access to this document.");
            return false;
          }
        }
      }
    }
    return true;
  }

This code is of course hardly ideal!

A better way is to use the flatMap, map, and filter methods of Optional, which is more concise, but requires a bit of a mental adjustment.

  private boolean documentAccessAllowed() {
    Boolean allowed = session.getInformation()
        .map(Information::getDocumentnumber)
        .flatMap(database::getDocument)
        .filter(doc -> !doc.getOwner().equals(user))
        .filter(doc -> database.getAdministrator(user.getName()).isEmpty())
        .isEmpty();
    if (!allowed) {
      addErrorMessage("You are not allowed access to this document.");
    }
    return allowed;
  }

The advice of a colleague of mine is that lambdas, even fairly trivial ones, can be a bit hard to read, and that turning them into a method with a good name clears things up immensely, like so:

private boolean documentAccessAllowed() {
    boolean allowed = session.getInformation()
        .map(Information::getDocumentnumber)
        .flatMap(database::getDocument)
        .filter(doc -> !isOwner(user, doc))
        .filter(doc -> !isAdministrator(database, user))
        .isEmpty();
    if (!allowed) {
      addErrorMessage("You are not allowed access to this document.");
    }
    return allowed;
  }

The idea here is not to make the code short (although that often helps) but to make the code very easy to read and follow.

In the example above there's a lot of negation (isEmpty(), !isAdministrator, !isOwner), which I don't like either. Sometimes shoehorning boolean expressions into sequential filters is a bad idea. Perhaps the following is clearer:

  private boolean documentAccessAllowed(Session session, Database database, User user) {
    boolean allowed = session.getInformation()
        .map(Information::getDocumentnumber)
        .flatMap(database::getDocument)
        .filter(doc -> isOwnerOrAdministrator(database, user, doc))
        .isPresent();
    if (!allowed) {
      addErrorMessage("You are not allowed access to this document.");
    }
    return allowed;
  }

  private static boolean isOwnerOrAdministrator(Database database, User user, Document doc) {
    return isOwner(user, doc) || isAdministrator(database, user);
  }
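The isOwner and isAdministrator helpers aren't shown above; they would be something like this (a sketch):

  private static boolean isOwner(User user, Document doc) {
    return doc.getOwner().equals(user);
  }

  private static boolean isAdministrator(Database database, User user) {
    return database.getAdministrator(user.getName()).isPresent();
  }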

Let me know if you have any suggestions, of which there are no doubt several, on how the readability of this can be improved.