[00:50:31] *** Joins: tkulasek (~tkulasek@134.134.139.74)
[03:15:32] *** Joins: destrudo_ (~destrudo@tomba.sonic.net)
[03:20:34] *** Quits: destrudo (~destrudo@tomba.sonic.net) (*.net *.split)
[03:59:23] bwalker: can we merge https://review.gerrithub.io/#/c/spdk/spdk/+/410271/?
[04:42:32] *** Quits: dlw (~Thunderbi@114.255.44.143) (Ping timeout: 256 seconds)
[05:38:06] *** Joins: pohly (~pohly@p54849CF1.dip0.t-ipconnect.de)
[07:12:56] *** Joins: dlw (~Thunderbi@222.129.238.6)
[07:46:26] *** Quits: dlw (~Thunderbi@222.129.238.6) (Ping timeout: 276 seconds)
[08:24:45] *** Quits: gila (~gila@5ED74129.cm-7-8b.dynamic.ziggo.nl) (Quit: Textual IRC Client: www.textualapp.com)
[08:58:03] *** Joins: DT1 (c6ba0105@gateway/web/freenode/ip.198.186.1.5)
[09:02:14] Is someone available to answer a question about NVMe initiator discovery against a discovery controller? I'm using rpc.py to kick it off.
[09:30:07] sure DT1 - ask away
[09:32:44] Thanks. I'm using rpc.py construct_nvme_bdev and passing in the NQN of the discovery controller. First question: is this the proper way of doing discovery? This is on 18.04.
[09:33:11] what application are you sending the RPC to?
[09:33:12] Note that if I use the actual NQN of the target, it works fine.
[09:37:08] This is my own app. The error happens after discovery completes. The discovery controller is probed and finds the NQNs on the target. It connects successfully to an NQN and creates the ctrlr. When it tries to create the bdev name (in bdev_nvme.c attach_cb) it calls spdk_nvme_transport_id_compare. One of the strings compared is the NQN. It fails because the discovery NQN != the discovered NQN, so the bdev is not created.
[09:38:07] let me take a look at what we do with the discovery service in the bdev layer
[09:40:33] Thanks. If I put a check in transport_id_compare to skip the NQN check when trid2->subnqn is the discovery controller, it passes and creates the bdev. Then it calls the routine a second time with trid1->subnqn as the discovery controller and gives me an error.
End result: the bdev is created, but the rpc reports an error back.
[09:41:12] I don't think you can actually make a bdev out of the discovery controller - the discovery controller doesn't expose any namespaces
[09:41:15] but I'm looking now
[09:43:57] I agree. But there appears to be a disconnect between the initiator transport, which recognizes the discovery controller and does the individual connects, and the rpc/bdev layer, which only tries to connect to what was passed on the rpc line.
[09:46:51] are you trying to make it so that a bdev gets created for each NVMe-oF device from a particular discovery service?
[09:48:16] the bdev layer doesn't comprehend discovery today - it can only direct connect
[09:48:24] the nvme driver understands how to use a discovery service though
[09:48:45] That's what I was hoping for. Users typically don't know the NQN of the target; just sending an rpc to the discovery ctrlr on the target is a quick way to connect.
[09:48:51] you can't even, for example, say to create a bdev for every local NVMe device on the PCIe bus
[09:49:32] That is the disconnect between the two. It seems like the bdev_nvme layer could be more aware of discovery NQNs.
[09:49:42] yeah, I agree
[09:49:45] I think this is solvable
[09:50:24] there is one main issue that needs to be worked out when connecting to subsystems through discovery (whether local PCIe scanning or an NVMe-oF discovery service)
[09:50:35] for each device found, we need to figure out what to name it in the bdev layer
[09:51:06] right now the rpc just connects to a single subsystem and the user provides the name
[09:51:35] I was surprised it called the id_compare twice: once with trid2 = the discovery NQN, and once with trid1 = the discovery NQN.
[09:51:49] where are you seeing that code specifically?
[09:51:57] the name is an issue.
[09:53:29] lib/nvme/nvme.c spdk_nvme_transport_id_compare, called from lib/bdev/bdev_nvme.c attach_cb
[09:55:55] Actually, I'm not positive the 2nd call is from attach_cb. I need to trace it back.
[09:56:03] yeah I only see one call in attach_cb
[09:56:10] it's in a loop, but the loop should be over a list of length 1
[09:56:46] I think the first call is from within the nvme driver itself
[09:56:49] which is discovery service aware
[09:57:23] so if you call spdk_nvme_probe with a transport id describing a discovery service, the nvme driver is smart enough to interpret that as "scan the discovery service and probe each subsystem"
[09:58:12] but the bdev layer isn't aware of that - so on attaching to the controller it tries to verify that the nqn is the discovery service nqn
[09:58:31] but the nvme driver never surfaces a discovery controller - it surfaces a controller for each device the discovery controller describes
[09:59:04] so basically using the discovery nqn in the bdev layer is broken today - it's not aware of discovery as a mechanism
[09:59:33] we could definitely add either a new rpc or overload the meaning of construct_nvme_bdev to make it discovery aware
[10:00:11] I think the reason we don't do this today is because of the naming thing
[10:01:53] The 2nd one comes from bdev_nvme.c in spdk_bdev_nvme_create. It calls nvme_ctrlr_get, which does the transport_id_compare again.
[10:07:09] Thanks for the information. The naming is an issue. Perhaps pass in a starting bdev name and just add an incrementing digit at the end, then report back what was created.
[10:08:23] yeah that's the only reasonable algorithm I can think of
[10:08:58] I think a discussion on the mailing list would be worthwhile, if you're willing to start that
[10:09:00] *** Quits: tkulasek (~tkulasek@134.134.139.74) (Ping timeout: 245 seconds)
[10:09:27] sure. thanks for the help.
[10:09:37] np
[10:17:26] *** Joins: travis-ci (~travis-ci@ec2-54-92-224-122.compute-1.amazonaws.com)
[10:17:27] (spdk/master) bdev: Begin encapsulating spdk_bdev_io (Ben Walker)
[10:17:27] Diff URL: https://github.com/spdk/spdk/compare/c6ae008db5bb...a94accabff62
[10:17:27] *** Parts: travis-ci (~travis-ci@ec2-54-92-224-122.compute-1.amazonaws.com) ()
[10:18:21] drv: can you rebase https://review.gerrithub.io/#/c/spdk/spdk/+/413154/?
[10:18:44] he's out today, but I can do it
[10:18:51] oh yeah - forgot about that
[10:18:53] slacker
[10:18:58] *** Joins: travis-ci (~travis-ci@ec2-54-92-224-122.compute-1.amazonaws.com)
[10:18:59] (spdk/master) nvmf: SGL support for NVMF RDMA Driver. (Srikanth kaligotla)
[10:18:59] Diff URL: https://github.com/spdk/spdk/compare/a94accabff62...8580daa1ac0f
[10:18:59] *** Parts: travis-ci (~travis-ci@ec2-54-92-224-122.compute-1.amazonaws.com) ()
[10:20:28] *** Joins: travis-ci (~travis-ci@ec2-54-81-40-192.compute-1.amazonaws.com)
[10:20:29] (spdk/master) doc/lvol: add diagrams to clone-snapshot doc (Tomasz Kulasek)
[10:20:29] Diff URL: https://github.com/spdk/spdk/compare/8580daa1ac0f...c9476f1b1b67
[10:20:29] *** Parts: travis-ci (~travis-ci@ec2-54-81-40-192.compute-1.amazonaws.com) ()
[10:21:52] *** Joins: travis-ci (~travis-ci@ec2-54-81-40-192.compute-1.amazonaws.com)
[10:21:53] (spdk/master) include/nvmf.h: add comments for callback functions (Yanbo Zhou)
[10:21:53] Diff URL: https://github.com/spdk/spdk/compare/c9476f1b1b67...ed97638ccd6d
[10:21:53] *** Parts: travis-ci (~travis-ci@ec2-54-81-40-192.compute-1.amazonaws.com) ()
[10:23:32] *** Joins: travis-ci (~travis-ci@ec2-54-242-138-74.compute-1.amazonaws.com)
[10:23:33] (spdk/master) nvmf: Quiesce I/O before closing spdk_nvmf_qpairs (Ben Walker)
[10:23:33] Diff URL: https://github.com/spdk/spdk/compare/ed97638ccd6d...72800826ec90
[10:23:33] *** Parts: travis-ci (~travis-ci@ec2-54-242-138-74.compute-1.amazonaws.com) ()
[10:28:12] *** Joins: travis-ci (~travis-ci@ec2-54-242-138-74.compute-1.amazonaws.com)
[10:28:13] (spdk/master) util/bit_array: add functions to count 0/1 bits (Daniel Verkamp)
[10:28:13] Diff URL: https://github.com/spdk/spdk/compare/72800826ec90...a1c7c58f7137
[10:28:13] *** Parts: travis-ci (~travis-ci@ec2-54-242-138-74.compute-1.amazonaws.com) ()
[10:31:03] *** Joins: travis-ci (~travis-ci@ec2-54-242-138-74.compute-1.amazonaws.com)
[10:31:04] (spdk/master) nvmf/rdma: monitor asynchronous events (Philipp Skadorov)
[10:31:04] Diff URL: https://github.com/spdk/spdk/compare/a1c7c58f7137...b6f90c527ade
[10:31:04] *** Parts: travis-ci (~travis-ci@ec2-54-242-138-74.compute-1.amazonaws.com) ()
[10:41:37] *** Joins: gila (~gila@5ED74129.cm-7-8b.dynamic.ziggo.nl)
[10:50:50] DT1: I'll make your messages go through to the list - eventually mailman will update once you've subscribed and let you send messages without approval
[10:53:52] thanks. I wasn't subscribed on gmail.
[12:11:58] *** Joins: travis-ci (~travis-ci@ec2-54-242-138-74.compute-1.amazonaws.com)
[12:11:59] (spdk/master) test/vhost: Add no-pci option and fix vhost live migration tc2 & tc3 (Pawel Niedzwiecki)
[12:11:59] Diff URL: https://github.com/spdk/spdk/compare/b6f90c527ade...c25adb84449d
[12:11:59] *** Parts: travis-ci (~travis-ci@ec2-54-242-138-74.compute-1.amazonaws.com) ()
[12:12:46] *** Joins: travis-ci (~travis-ci@ec2-54-92-224-122.compute-1.amazonaws.com)
[12:12:47] (spdk/master) autotest: Adding qos test case under the spdk/test/iscsi_tgt directory. (chenlo2x)
[12:12:48] Diff URL: https://github.com/spdk/spdk/compare/c25adb84449d...1559a7cdaaef
[12:12:48] *** Parts: travis-ci (~travis-ci@ec2-54-92-224-122.compute-1.amazonaws.com) ()
[14:06:40] *** Joins: johnmeneghini (~johnmeneg@pool-100-0-53-181.bstnma.fios.verizon.net)
[14:06:43] *** Quits: johnmeneghini (~johnmeneg@pool-100-0-53-181.bstnma.fios.verizon.net) (Client Quit)
[14:16:37] *** Quits: pohly (~pohly@p54849CF1.dip0.t-ipconnect.de) (Quit: Leaving.)
[14:41:30] bwalker: do you want to re-add your +2 on https://review.gerrithub.io/#/c/spdk/spdk/+/413154/ after the rebase?
[14:41:58] done
[16:04:04] *** Quits: DT1 (c6ba0105@gateway/web/freenode/ip.198.186.1.5) (Ping timeout: 260 seconds)
[16:14:20] *** Joins: Shuhei (caf6fc61@gateway/web/freenode/ip.202.246.252.97)
[17:47:20] *** Quits: Shuhei (caf6fc61@gateway/web/freenode/ip.202.246.252.97) (Ping timeout: 260 seconds)
[17:55:06] *** Joins: Shuhei (caf6fc61@gateway/web/freenode/ip.202.246.252.97)
[18:13:24] Hi all, has anyone used VTune to collect stats while the SPDK nvmf-tgt runs?
[18:18:25] I will try to use perfmon in parallel with VTune.
[18:23:24] sorry, I meant perf.
[18:32:22] It looks like the perf command worked while nvmf-tgt was running, so that's OK for me for now. Thanks.
[18:36:58] *** Joins: dlw (~Thunderbi@114.255.44.143)
[19:04:42] *** Joins: lhodev_ (~lhodev@inet-hqmc02-o.oracle.com)
[19:06:23] *** Quits: lhodev (~lhodev@66-90-218-190.dyn.grandenetworks.net) (Ping timeout: 256 seconds)
[20:16:24] *** Quits: guerby (~guerby@april/board/guerby) (Ping timeout: 256 seconds)
[22:13:17] /back
[23:04:06] *** Joins: pohly (~pohly@p54849CF1.dip0.t-ipconnect.de)