[00:52:12] *** Joins: vmysak (~vmysak@192.55.54.40)
[01:41:18] *** Joins: travis-ci (~travis-ci@ec2-54-146-150-26.compute-1.amazonaws.com)
[01:41:19] (spdk/master) CHANGELOG: add DPDK 18.11 (Darek Stojaczyk)
[01:41:20] Diff URL: https://github.com/spdk/spdk/compare/ab696ae1cfb9...a2e522a3c577
[01:41:20] *** Parts: travis-ci (~travis-ci@ec2-54-146-150-26.compute-1.amazonaws.com) ()
[01:43:35] *** Joins: tomzawadzki (uid327004@gateway/web/irccloud.com/x-thoiqjbzcihowpzn)
[01:50:10] *** Joins: travis-ci (~travis-ci@ec2-54-167-54-101.compute-1.amazonaws.com)
[01:50:11] (spdk/master) config_converter: fix incorrect indentation (Karol Latecki)
[01:50:12] Diff URL: https://github.com/spdk/spdk/compare/a2e522a3c577...6a427c748103
[01:50:12] *** Parts: travis-ci (~travis-ci@ec2-54-167-54-101.compute-1.amazonaws.com) ()
[02:07:52] *** Joins: pniedzwx (pniedzwx@nat/intel/x-pjslkzfyssishchf)
[03:31:56] *** Joins: gila (~gila@5ED4D979.cm-7-5d.dynamic.ziggo.nl)
[03:35:57] *** Quits: gila (~gila@5ED4D979.cm-7-5d.dynamic.ziggo.nl) (Client Quit)
[03:38:10] *** Joins: travis-ci (~travis-ci@ec2-54-161-143-249.compute-1.amazonaws.com)
[03:38:11] (spdk/master) spdkcli: Fix: catch exceptions for spdkcli commands. (Pawel Kaminski)
[03:38:11] Diff URL: https://github.com/spdk/spdk/compare/6a427c748103...223810e95b24
[03:38:11] *** Parts: travis-ci (~travis-ci@ec2-54-161-143-249.compute-1.amazonaws.com) ()
[03:43:42] *** Joins: travis-ci (~travis-ci@ec2-54-197-114-131.compute-1.amazonaws.com)
[03:43:43] (spdk/master) spdkcli: Fix: find nvme ctrlr first when delete nvme (Pawel Kaminski)
[03:43:44] Diff URL: https://github.com/spdk/spdk/compare/223810e95b24...ad0b3c974f7b
[03:43:44] *** Parts: travis-ci (~travis-ci@ec2-54-197-114-131.compute-1.amazonaws.com) ()
[06:41:08] *** Quits: guerby (~guerby@april/board/guerby) (Remote host closed the connection)
[06:44:13] *** Joins: guerby (~guerby@april/board/guerby)
[07:12:09] *** Joins: spdk-jenkins-bot (c0c6972b@gateway/web/freenode/ip.192.198.151.43)
[07:12:42] *** Quits: spdk-jenkins-bot (c0c6972b@gateway/web/freenode/ip.192.198.151.43) (Client Quit)
[07:42:07] test
[08:16:36] *** Parts: klateck (~klateck@134.134.139.72) ("Leaving")
[08:20:11] hmmm, this patch from karol seems to fix an issue I have on a patch in the CH TP but only on CentOS. Is this somehow not affecting everyone? https://review.gerrithub.io/c/spdk/spdk/+/442694
[08:22:52] *** Joins: klateck (~klateck@134.134.139.72)
[08:32:38] *** Joins: spdk-jenkins-bot (~spdk-jenk@134.134.139.72)
[08:32:53] TEST
[08:33:43] Test again. Sorry guts, I will spam a little
[08:33:49] guys*
[08:57:07] *** Quits: vmysak (~vmysak@192.55.54.40) (Ping timeout: 240 seconds)
[09:27:05] *** bwalker sets mode: +o tomzawadzki
[10:03:22] klateck: i think you can remove the "spdk-" prefix from the bot name
[10:17:33] sweet
[11:03:08] *** Joins: travis-ci (~travis-ci@ec2-54-158-161-105.compute-1.amazonaws.com)
[11:03:09] (spdk/master) jsonrpc.md: remove set_bdev_qos_limit_iops (Darek Stojaczyk)
[11:03:09] Diff URL: https://github.com/spdk/spdk/compare/ad0b3c974f7b...0c21aa1a0e85
[11:03:09] *** Parts: travis-ci (~travis-ci@ec2-54-158-161-105.compute-1.amazonaws.com) ()
[11:07:49] qos and unmap/write_zeroes
[11:08:11] i think we need to keep unmap/write_zeroes under the write qos limit
[11:08:49] agree - at least write zeroes is basically a write
[11:08:57] 1) we know that unmap operations have an effect on the underlying SSD, meaning if the SSD is shared between multiple lvols, we shouldn't let one lvol be able to arbitrarily unmap
[11:09:32] 2) write_zeroes may actually result in physically writing zeroes instead of sending a real write_zeroes command (i.e. Optane SSDs)
[11:09:54] I just did the napkin math and I think we average ~1000 lines of code change per business day for this quarter.
[11:10:12] like on master - not counting thrash on reviews
[11:10:29] i'm thinking the administrator needs to set the limit sensibly - for example, if we're assigning a 500GB volume to a VM, should we be limiting the writes to 50MB/s?
[11:11:01] I think we should leave everything to the administrators
[11:33:54] i'm thinking we need a separate limit for unmap/write_zeroes
[11:36:36] i'm also wondering if we should be waiting to complete I/Os that are bigger than what's allowed in the current timeslice, until the allotment gets back to 0
[11:37:29] here's what's happening - mkfs.ext4 sends a huge unmap command (1GB), let's say I've set the limit to 10MB/s
[11:38:03] the unmap is allowed to get executed, but the bdev qos then operates at a deficit - it won't allow any more I/O to get executed for about 100 seconds (1GB / 10MB/s)
[11:38:43] but the unmap command gets completed far sooner than 100 seconds - mkfs.ext4 gets the unmap completion and then starts writing superblocks and other accounting information
[11:39:09] but now these writes are stuck in the queue for 100 seconds - but mkfs.ext4 wouldn't have sent those writes if we'd waited to complete that 1GB unmap I/O
[11:41:49] but what if it's the case that it was the last I/O you were sending
[11:41:58] you want that completion as quickly as possible
[11:42:11] I think it's better to throttle the incoming commands than the completions for latency reasons
[11:42:18] "you" meaning the VM owner, or the host administrator?
[11:42:25] the VM owner
[11:43:15] the VM owner issued a 1GB I/O with a limit of 10MB/s - they are still getting their I/O completed within those limits
[11:43:46] yes - but what if they only needed that 1GB I/O? Or if they can start working on something else when that 1GB I/O completes?
[11:44:04] i.e. what if the next thing they want to do isn't I/O based
[11:44:09] no need to make them wait
[11:44:47] i don't think you can throttle the incoming commands
[11:45:00] why not? because they'll time out?
[11:45:07] how would you throttle them?
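[Editor's note: the exchange above and below describes the bdev QoS "deficit" behavior: an oversized I/O is admitted anyway, and the byte allowance then goes negative, blocking later I/O until timeslice refills bring it back above zero. The following is a minimal illustrative sketch of that accounting, not the actual SPDK bdev QoS code; all names, constants, and the timeslice length are made up for the example.]

/*
 * Sketch of deficit-style byte-rate QoS accounting (illustrative only).
 * A timeslice grants a fixed byte allowance; an oversized I/O is still
 * submitted, driving the allowance negative, and later I/O is queued
 * until refills bring it back above zero.
 */
#include <inttypes.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define TIMESLICE_MS 10

struct qos_state {
	int64_t bytes_per_timeslice;   /* limit in MB/s * 1e6 * TIMESLICE_MS / 1000 */
	int64_t bytes_remaining;       /* may go (deeply) negative */
};

/* Called once per timeslice (e.g. from a poller). */
static void qos_refill(struct qos_state *qos)
{
	qos->bytes_remaining += qos->bytes_per_timeslice;
	if (qos->bytes_remaining > qos->bytes_per_timeslice) {
		qos->bytes_remaining = qos->bytes_per_timeslice;
	}
}

/* Returns true if the I/O may be submitted now, false if it must be queued. */
static bool qos_try_submit(struct qos_state *qos, uint64_t io_bytes)
{
	if (qos->bytes_remaining <= 0) {
		return false;
	}
	/* The I/O is allowed even if it is larger than what is left;
	 * the difference becomes the deficit. */
	qos->bytes_remaining -= (int64_t)io_bytes;
	return true;
}

int main(void)
{
	/* 10 MB/s limit, 10 ms timeslices -> 100 KB per timeslice. */
	struct qos_state qos = { .bytes_per_timeslice = 100 * 1000,
				 .bytes_remaining = 100 * 1000 };
	uint64_t timeslices = 0;

	/* A single 1 GB unmap is admitted immediately... */
	qos_try_submit(&qos, 1000ULL * 1000 * 1000);

	/* ...but the resulting deficit blocks everything else for ~100 seconds. */
	while (!qos_try_submit(&qos, 4096)) {
		qos_refill(&qos);
		timeslices++;
	}
	printf("next 4 KB write waited ~%" PRIu64 " ms\n", timeslices * TIMESLICE_MS);
	return 0;
}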
[11:45:25] by incoming commands I mean the writes after the big unmap [11:45:38] we throttle them in the QoS layer just like today [11:45:49] I think the way it works today is right, is what I'm saying [11:49:29] the way it's coded today, it lends to these types of timeouts [11:50:03] maybe it's a write_zeroes/unmap only thing though [11:50:45] if we link writes and unmaps in the same throttle, I can definitely see how we'd end up with write timeouts because a large unmap is blocking them [11:51:09] but I'm not sure that's bad - the actual disk will end up blocking the writes during the unmap too [11:52:35] it just seems wrong to complete the 1GB unmap as soon as it's completed - if it was sent as a bunch of 1MB unmaps, you'd get the last 1MB unmap completion much later [11:53:24] I agree from a throttling view it makes sense to hold the completion [11:53:37] because then you're completing at effectively the rate you promised [11:57:42] otherwise even if we add a separate unmap/write_zeroes limit, we'll hit a somewhat similar problem - although it might be a little less pronounced [11:58:56] i could have a 50GB volume, 100MB/s write QoS, 1GB/s unmap QoS - I trim the whole volume, send the completion back as soon as it's ready, mkfs starts sending writes which will be blocked for up to 50 seconds [12:10:30] *** Joins: travis-ci (~travis-ci@ec2-54-205-199-237.compute-1.amazonaws.com) [12:10:31] (spdk/master) nvmf: Don't increment current_recv_depth for dummy RECV (Ben Walker) [12:10:31] Diff URL: https://github.com/spdk/spdk/compare/0c21aa1a0e85...e1dd85a5b73e [12:10:31] *** Parts: travis-ci (~travis-ci@ec2-54-205-199-237.compute-1.amazonaws.com) () [12:33:11] *** Quits: tomzawadzki (uid327004@gateway/web/irccloud.com/x-thoiqjbzcihowpzn) (Quit: Connection closed for inactivity) [14:27:28] jimharris, FYI I'm going to finish all my rebasing and addressing typos/issues and then I'll go back and move things from patch to patch in the series if that's cool. re: compress vbdev [14:47:28] *** Joins: travis-ci (~travis-ci@ec2-34-227-194-131.compute-1.amazonaws.com) [14:47:29] (spdk/master) changelog: added missing items for 19.01 (Tomasz Zawadzki) [14:47:30] Diff URL: https://github.com/spdk/spdk/compare/e1dd85a5b73e...4a6f45520c2c [14:47:30] *** Parts: travis-ci (~travis-ci@ec2-34-227-194-131.compute-1.amazonaws.com) () [14:49:14] *** Joins: travis-ci (~travis-ci@ec2-184-73-34-183.compute-1.amazonaws.com) [14:49:15] (spdk/master) QoS: remove the limit on unmap kinds of I/O (GangCao) [14:49:15] Diff URL: https://github.com/spdk/spdk/compare/4a6f45520c2c...ce75af214046 [14:49:15] *** Parts: travis-ci (~travis-ci@ec2-184-73-34-183.compute-1.amazonaws.com) () [15:19:48] peluse: ok [15:19:58] bwalker: i don't see why your msg cache patch is failing [15:20:14] jimharris, gracias [15:21:13] I can't figure it out for the life of me either [15:21:28] I am running it locally with all of the same bdevs [15:21:31] and it passes just fine [15:21:42] crypto, pmdk, aio, etc. [15:21:43] all fine [15:22:14] I made sure I'm incrementing the count and decrementing the count in all the right spots [15:22:22] I don't see how that counter and the list can get out of sync [15:24:00] it's always the bdevio test that's failing? [15:31:08] yeah [15:59:09] there are somehow 3 reactors in bdevio there actually [16:01:37] oh that's just part of bdevio
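[Editor's note: the alternative discussed around 11:52-11:58 above is to hold the completion of an oversized I/O until its bytes have been "earned back" at the promised rate, so the initiator never observes throughput above the limit. The sketch below is illustrative only and does not reflect actual SPDK code; all names are hypothetical.]

/*
 * Sketch of completion throttling (illustrative only): a finished but
 * oversized I/O keeps its completion callback deferred until enough
 * timeslice allowance has accumulated to cover its size.
 */
#include <stdbool.h>
#include <stdint.h>

struct deferred_completion {
	uint64_t io_bytes;              /* size of the already-finished I/O */
	uint64_t earned_bytes;          /* bytes "paid back" so far */
	void (*complete_cb)(void *ctx); /* completion to deliver to the initiator */
	void *cb_ctx;
};

/*
 * Called once per timeslice with that slice's byte allowance. Returns true
 * once the completion has been released; e.g. a 1 GB unmap against a
 * 10 MB/s limit would be released after roughly 100 seconds.
 */
static bool deferred_completion_tick(struct deferred_completion *dc,
				     uint64_t slice_bytes)
{
	dc->earned_bytes += slice_bytes;
	if (dc->earned_bytes >= dc->io_bytes) {
		dc->complete_cb(dc->cb_ctx);
		return true;
	}
	return false;
}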