[10:34:43] (spdk/master) rpc: Fix missing import for json exception (Pawel Kaminski)
[10:34:43] Diff URL: https://github.com/spdk/spdk/compare/40b6f761b255...0054af55c9f9
[14:20:14] for example, currently we zero out the payload member, and then just after we copy in the payload structure passed to the function
[14:20:30] cleaning that up cuts out another 16ns in the submit path on my system
[14:32:09] down to 276ns total (submission + completion) on my system - 2.1GHz => 580 core clocks
[14:49:12] I get "rdma.c: 353:spdk_nvmf_rdma_qpair_initialize: *ERROR*: rdma_create_qp failed: errno 12: Cannot allocate memory" and "rdma.c:1639:spdk_nvmf_rdma_poll_group_add: *ERROR*: Failed to initialize nvmf_rdma_qpair with qpair=0x1feae90" when I try to initiate discovery over NVMe-oF. I have 256GB of free memory on the setup. Any idea what could be wrong?
[15:15:03] (spdk/master) include/event.h: add comments for callback functions (Yanbo Zhou)
[15:15:03] Diff URL: https://github.com/spdk/spdk/compare/0054af55c9f9...51806fa7b64c
[15:16:11] Clark__: it looks like creating the RDMA queue is failing during connect; I don't think we've observed this in our tests
[15:16:14] what kind of RDMA NIC do you have?
[15:29:23] drv: please spend a few extra brain cycles on https://review.gerrithub.io/#/c/spdk/spdk/+/413153/ once you get to it
[15:29:51] i'm pretty sure i've exhausted all of the cases where each of the nvme_request fields gets initialized, but another pair of eyes wouldn't hurt
[15:30:52] I'll take a look at it
[15:36:45] jimharris: is payload_offset definitely initialized in all cases? it is used unconditionally in e.g.
nvme_pcie_qpair_build_contig_request() but only set in _nvme_ns_cmd_rw() as far as I can see
[15:37:44] (same for md_offset)
[15:38:20] otherwise looks good to me
[15:51:35] drv: it is a Mellanox ConnectX-5
[16:05:06] drv: thanks for that dpdk branch, i pushed some patches
[16:07:03] there's also a dpdk bug/vulnerability i seem to have fixed. i should probably push it to dpdk before it's merged into our fork
[16:07:15] but that's for tomorrow. time to sleep now
[16:07:36] darsto: are you back in Gdansk now?
[16:21:27] yes
[16:26:22] and i can finally breathe now. the pollution in Beijing is truly hazardous
[16:26:37] I wouldn't last another month there
[16:27:05] yeah - i remember how bad it looked flying into Beijing
[16:27:23] i think overall i didn't get the worst of it when i was there - but it was definitely noticeable
[16:28:17] jimharris: if possible, could you add a comment on re-enabling iscsi hotplug? https://github.com/spdk/spdk/issues/248
[16:29:54] it looks like Ziye has proposed a more step-by-step approach than yours.
[16:30:21] and agreeing on the policy between Ziye and you looks necessary, if I understand correctly.
[16:32:01] Ziye abandoned https://review.gerrithub.io/#/c/spdk/spdk/+/394700/ and https://review.gerrithub.io/#/c/spdk/spdk/+/412316/.
[16:32:24] if you accept the foundation of them, he can restore them and go forward, I think.
[16:43:06] hi Shuhei - yes, I will take a look at this
[16:43:53] jimharris: thank you!
[16:52:27] jimharris: did you verify that you get the same perf improvement with the struct reordering?
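The rdma_create_qp ENOMEM failure reported above (14:49) usually points at pinned-memory or queue-resource limits rather than exhausted system RAM. A plausible first check (an assumption for illustration, not a confirmed diagnosis of this particular report) is the locked-memory rlimit, since RDMA queue buffers must be pinned; a minimal Linux sketch:

```c
#include <assert.h>
#include <stdio.h>
#include <sys/resource.h>

/* rdma_create_qp() failing with errno 12 ("Cannot allocate memory")
 * rarely means RAM is exhausted; queue resources must be pinned, so a
 * low RLIMIT_MEMLOCK is a common cause. Returns 1 when the limit is
 * unlimited, 0 otherwise (printing the current soft limit). */
static int memlock_is_unlimited(void)
{
	struct rlimit rl;

	if (getrlimit(RLIMIT_MEMLOCK, &rl) != 0)
		return 0;
	if (rl.rlim_cur == RLIM_INFINITY)
		return 1;
	printf("RLIMIT_MEMLOCK soft limit: %llu bytes\n",
	       (unsigned long long)rl.rlim_cur);
	return 0;
}
```

If the limit turns out to be finite and small, raising it is a common fix; the other usual suspect is requesting a queue depth beyond the device's max_qp_wr/max_cqe caps (visible via ibv_devinfo -v).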
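The zero-then-copy cleanup (14:20) and the payload_offset question (15:36) both come down to which request fields each init path must touch. A minimal sketch, using hypothetical stand-in structs rather than SPDK's actual nvme_request layout: zero only the prefix that is not immediately overwritten, and clear payload_offset/md_offset explicitly so fields that are read unconditionally stay initialized on every path:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical stand-ins for the payload/request structures. */
struct payload {
	void *contig_buf;
	size_t size;
};

struct request {
	int opc;
	int cid;
	struct payload payload;   /* copied from the caller below */
	size_t payload_offset;    /* read unconditionally by builders */
	size_t md_offset;         /* same hazard */
};

/* Zero everything, then copy the payload over the just-zeroed bytes:
 * the memset of 'payload' is wasted work. */
static void init_request_slow(struct request *req, const struct payload *p)
{
	memset(req, 0, sizeof(*req));
	req->payload = *p;
}

/* Zero only the prefix the caller does not overwrite, then set the
 * remaining fields explicitly - cheaper, but every field after the
 * prefix must be accounted for by hand. */
static void init_request_fast(struct request *req, const struct payload *p)
{
	memset(req, 0, offsetof(struct request, payload));
	req->payload = *p;
	req->payload_offset = 0;
	req->md_offset = 0;
}
```

The two produce identical request contents; the fast variant just avoids re-zeroing bytes that are stored again immediately, which is the kind of cleanup that shaved the 16ns mentioned above.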
[16:52:41] it's maybe a 10ns instead of 12ns improvement
[16:52:45] the change looks good, just questioning the code gen from gcc 8 :)
[16:52:53] i haven't looked at that yet
[16:53:10] it does 72 bytes' worth of zeroing with rep stosq, then 7 bytes with three regular mov instructions
[16:53:31] (because payload starts at offset 79)
[16:53:34] right
[16:53:51] not sure why it isn't just using rep stosb, since I thought that was the fastest on Haswell+ anyway, and we compile with -march=native
[16:54:03] maybe the memset() builtin doesn't know about that
[16:57:00] (actually even older than that, Ivy Bridge has it too)
[17:11:13] alright, I've stared at it for a while and I can't come up with any clever tricks to get rid of the weird offset
[17:11:38] i'm trying to figure out why bdevio fails on large write_zeroes in the test pool
[17:11:44] i can't repro it on my system
[17:11:53] unless we want to go extra crazy and stash the type into the low bits of a pointer or something :)
[17:12:12] hmm, with which patch?
[17:12:24] your latest "optimize memsets" patch passed
[17:12:54] it might have been fallout from not zeroing out the payload_offset?
[17:13:18] doh!
yes, I was looking at the old one
[17:14:11] I'm actually mildly tempted to do this pointer/type-stashing thing now that I've said it
[17:14:28] we only need to store two possible payload types (one bit)
[17:14:48] which would let us get nvme_payload down to a nice even 32 bytes
[17:17:17] maybe another day
[18:40:30] jimharris: ok, I got carried away and redid nvme_payload in a slightly different way (less hacky) in the middle of your patch series
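The codegen observed at 16:53 - a 79-byte clear becoming rep stosq over 72 bytes plus three plain mov stores - is just alignment arithmetic over the 8-byte qword size; the tail of n & 7 bytes is covered with one store per set bit (a 4-, 2-, and 1-byte mov for a 7-byte remainder). A tiny illustrative sketch (only the 79 comes from the log; the helpers are hypothetical):

```c
#include <assert.h>
#include <stddef.h>

/* Bytes gcc's inline memset expansion clears with rep stosq. */
static size_t qword_bytes(size_t n)
{
	return n & ~(size_t)7;
}

/* Leftover bytes handled with plain mov stores. */
static size_t tail_bytes(size_t n)
{
	return n & 7;
}

/* One mov per set bit in the tail (e.g. 7 = 4 + 2 + 1 -> three movs). */
static unsigned tail_mov_count(size_t n)
{
	unsigned c = 0;
	size_t t = n & 7;

	while (t) {
		c++;
		t &= t - 1;   /* clear lowest set bit */
	}
	return c;
}
```

For the 79-byte clear in the chat, this gives 72 bytes via rep stosq and a 7-byte tail needing exactly three movs, matching the disassembly described above.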
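The "stash the type into the low bits of a pointer" idea from 17:11-17:14 works because with only two payload types one bit suffices, and any pointer to data aligned to at least 2 bytes has its low bit free. A hedged sketch of the technique in general (hypothetical names and layout - not the less-hacky nvme_payload rework that was actually pushed):

```c
#include <assert.h>
#include <stdint.h>

/* Two payload types -> one tag bit, stashed in the pointer's low bit.
 * Requires the pointed-to data to be at least 2-byte aligned. */
enum payload_type { PAYLOAD_CONTIG = 0, PAYLOAD_SGL = 1 };

/* Pack a pointer and its type into a single word. */
static void *payload_pack(void *ptr, enum payload_type t)
{
	return (void *)((uintptr_t)ptr | (uintptr_t)t);
}

/* Recover the type from the tag bit. */
static enum payload_type payload_type_of(void *packed)
{
	return (enum payload_type)((uintptr_t)packed & 1);
}

/* Recover the original pointer by masking the tag bit off. */
static void *payload_ptr(void *packed)
{
	return (void *)((uintptr_t)packed & ~(uintptr_t)1);
}
```

Folding the type field into the pointer is what would let a payload struct drop a separate enum member and shrink to the "nice even 32 bytes" mentioned above, at the cost of masking on every access.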