[00:49:49] *** Joins: param (~param@157.49.237.44)
[01:14:08] *** Quits: param (~param@157.49.237.44) (Quit: Going offline, see ya! (www.adiirc.com))
[01:14:46] *** Joins: param (~param@157.49.237.44)
[01:17:33] *** Joins: tkulasek (tkulasek@nat/intel/x-vxqyrctehbwjisxm)
[02:51:43] *** Quits: param (~param@157.49.237.44) (Quit: Going offline, see ya! (www.adiirc.com))
[02:52:20] *** Joins: param (~param@157.49.237.44)
[02:52:45] *** Quits: param (~param@157.49.237.44) (Remote host closed the connection)
[03:42:27] *** Quits: dlw (~Thunderbi@114.255.44.143) (Ping timeout: 240 seconds)
[06:53:39] *** Joins: dlw (~Thunderbi@114.252.46.203)
[07:18:36] *** Quits: dlw (~Thunderbi@114.252.46.203) (Quit: dlw)
[09:06:27] drv or someone else: I'm trying to create an lvol store but I always see some errors during unmap
[09:06:40] e.g. rpc.py construct_lvol_store -c 1048576 Nvme0n1 Lvs1
[09:06:53] what kind of errors?
[09:06:56] and what NVMe device?
[09:08:03] nvme_qpair.c: 137:nvme_io_qpair_print_command: *NOTICE*: DATASET MANAGEMENT sqid:1 cid:190 nsid:1
[09:08:14] nvme_qpair.c: 283:nvme_qpair_print_completion: *NOTICE*: LBA OUT OF RANGE (00/80) sqid:1 cid:190 cdw0:0 sqhd:0002 p:1 m:0 dnr:0
[09:09:17] SSDPEDMD400G4
[09:09:44] could you add a debug print in bdev_nvme_unmap() to verify that offset_blocks and num_blocks look OK there?
[09:09:56] maybe there is a bug in the unmap splitting code, but it looks right to me at first glance
[09:10:53] I went through it step by step and it looks ok.
[09:11:15] is there any limitation on maximum IO or unmap size?
[09:11:44] each Dataset Management descriptor can describe 2^32-1 blocks, which is why we have this splitting
[09:12:05] but I don't know of any other limit than that
[09:13:44] it would be good to verify that the final range->starting_lba + range->length looks OK in that bdev_nvme_unmap() as well (make sure it's in range of the disk's number of blocks)
[09:16:30] also worth checking if you have up-to-date SSD firmware - https://downloadcenter.intel.com/download/27666?v=t
[09:19:33] checking
[09:21:33] the UI is a bit clunky if you haven't used it before
[09:21:54] basically you need to do 'isdct show -intelssd' to see the list of drives and whether they have a firmware update, then 'isdct load -intelssd 0' to update drive 0, for example
[09:21:55] bdev_nvme_unmap
[09:22:01] it is using one range
[09:22:06] offset_blocks + num_blocks = 97677846
[09:22:10] nbdev->disk.blockcnt = 97677846
[09:22:21] so this part is fine
[09:22:28] will try to update firmware
[09:22:31] ok, that looks good
[09:43:54] hi KenneF - I saw your question yesterday. Are you looking to use an NVDIMM as a replacement for a block device? Or for some other purpose?
[09:44:02] the answer for how to best use it depends on what you're doing with it
[09:45:09] also peluse - I set up the znc thing. It's great!
[09:55:15] new firmware fixed the issue, thx :/
[09:56:13] do you happen to know what the old firmware version was before you updated it?
[09:56:24] that should be in the isdct show output if you still have it
[09:57:55] pwodkowx - is your SSD formatted for 4KB instead of 512B by chance?
[09:58:12] or rather was it before you updated firmware?
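For reference, a minimal standalone sketch of the sanity check discussed around 09:13-09:22: verifying that the unmap stays within the disk's block count and how many Dataset Management ranges the split would need. The offset value is an assumption made for illustration; only the sum offset_blocks + num_blocks was reported above.

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* Values reported in the log above; offset_blocks = 0 is assumed,
         * since only the sum offset_blocks + num_blocks was posted. */
        uint64_t offset_blocks = 0;
        uint64_t num_blocks = 97677846;
        uint64_t blockcnt = 97677846;   /* nbdev->disk.blockcnt */

        /* Each NVMe Dataset Management range descriptor can cover at most
         * 2^32 - 1 blocks, which is why the nvme bdev splits large unmaps. */
        const uint64_t max_blocks_per_range = 0xFFFFFFFFull;
        uint64_t ranges = (num_blocks + max_blocks_per_range - 1) / max_blocks_per_range;

        printf("ranges needed: %" PRIu64 "\n", ranges);
        if (offset_blocks + num_blocks > blockcnt) {
            printf("unmap extends past the namespace - would hit LBA OUT OF RANGE\n");
        } else {
            printf("unmap fits within the namespace (%" PRIu64 " blocks)\n", blockcnt);
        }
        return 0;
    }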
[09:59:40] it is and was 4k
[09:59:50] before the update the firmware version was 8DV10171
[10:00:03] now it is 8DV101H0
[10:02:24] *** Quits: tkulasek (tkulasek@nat/intel/x-vxqyrctehbwjisxm) (Ping timeout: 260 seconds)
[10:28:33] yeah - I ran into that before too, that older FW had a few issues with 4KB formatted namespaces
[10:37:54] *** Joins: travis-ci (~travis-ci@ec2-54-80-146-36.compute-1.amazonaws.com)
[10:37:55] (spdk/master) bdev/qos: add the QoS information on RPC get_bdevs interface (GangCao)
[10:37:55] Diff URL: https://github.com/spdk/spdk/compare/83795a1600b9...b8681aa6594f
[10:37:55] *** Parts: travis-ci (~travis-ci@ec2-54-80-146-36.compute-1.amazonaws.com) ()
[10:48:21] jimharris/bwalker: please take a look at https://review.gerrithub.io/#/c/395404/ (vhost nvme RPCs)
[11:36:57] Hi @bwalker, I have an 8GB NVDIMM I plan to use exclusively for power fail scenarios. Ideally, I would like to be able to flush it to SSD directly from the NVDIMM.
[11:37:25] I see spdk uses vtophys, which I assume relies on DPDK's translation of huge page addresses
[11:38:14] For me it would be easiest to obtain the physical offset of the NVDIMM and integrate it with the translation map somehow
[11:39:40] hmm, take a look at http://www.spdk.io/doc/env_8h.html#ae046c45de2849d15f29dedc52e915ad7
[11:39:47] it says it must map to pinned huge pages
[11:40:01] but in reality, it just has to map to pinned 2MB regions
[11:40:29] what are you using to allocate memory on the NVDIMM currently?
[11:40:33] PMDK?
[11:40:58] I'm not super familiar with how to program against NVDIMMs, but hopefully I know enough
[11:43:44] It will be a kernel driver, I intend to mmap the entire region, and obtain the physical start
[11:44:43] however the standard spdk API only accepts the physical start on write/read IO
[11:51:54] *** Quits: KenneF (cf8c2b51@gateway/web/freenode/ip.207.140.43.81) (Ping timeout: 260 seconds)
[11:56:34] *** Joins: KenneF (cf8c2b51@gateway/web/freenode/ip.207.140.43.81)
[11:59:18] bwalker: does calling spdk_mem_register on the NVDIMM region work?
[11:59:43] oh sorry - that's what you linked to already
[12:01:43] I assume it depends on what type of check is done internally - whether it's checking that the memory comes from an actual hugepage vs. checking that each page is contiguous
[12:05:08] *** Joins: igor__ (84ed9a7e@gateway/web/freenode/ip.132.237.154.126)
[12:05:37] Hi all, if I have to add any GCC option to the make process, where should I do that?
[12:09:40] igor__: you can set CFLAGS and LDFLAGS environment variables for C compiler and linker flags respectively
[12:12:03] Thanks drv :)
[12:12:59] these will also get picked up by configure, so you can do e.g. "CFLAGS='--my-cflags' ./configure" and it will add them automatically when you run make
[12:15:03] ook..
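As a rough illustration of the approach discussed above (and confirmed by bwalker further down), here is a sketch of registering an mmap'd NVDIMM region with spdk_mem_register() so that pointers into it can be handed to SPDK I/O calls. The device path, size, and driver behaviour are placeholders, and the mapping is assumed to resolve to pinned, physically contiguous 2MB regions as the env.h doc requires.

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    #include "spdk/env.h"

    /* Placeholder device node and size - substitute whatever your
     * kernel driver actually exposes for the NVDIMM region. */
    #define NVDIMM_DEV   "/dev/my_nvdimm0"
    #define NVDIMM_SIZE  (8ULL * 1024 * 1024 * 1024)

    static void *
    map_and_register_nvdimm(void)
    {
        int fd = open(NVDIMM_DEV, O_RDWR);
        if (fd < 0) {
            perror("open");
            return NULL;
        }

        /* The mapping must be backed by pinned 2MB physical regions for
         * vtophys to translate it (see the env.h doc linked above). */
        void *buf = mmap(NULL, NVDIMM_SIZE, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        close(fd);
        if (buf == MAP_FAILED) {
            perror("mmap");
            return NULL;
        }

        /* Once registered, pointers into this region can be passed to SPDK
         * read/write calls just like buffers from spdk_dma_malloc(). */
        if (spdk_mem_register(buf, NVDIMM_SIZE) != 0) {
            fprintf(stderr, "spdk_mem_register failed\n");
            munmap(buf, NVDIMM_SIZE);
            return NULL;
        }

        return buf;
    }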
[12:19:45] *** Joins: travis-ci (~travis-ci@ec2-54-81-80-255.compute-1.amazonaws.com)
[12:19:46] (spdk/master) bdev: fix timing of init_complete callback (Paul Luse)
[12:19:46] Diff URL: https://github.com/spdk/spdk/compare/cb1c88d19f0c...cbb8f4657564
[12:19:46] *** Parts: travis-ci (~travis-ci@ec2-54-81-80-255.compute-1.amazonaws.com) ()
[12:26:10] *** Joins: travis-ci (~travis-ci@ec2-54-81-80-255.compute-1.amazonaws.com)
[12:26:11] (spdk/master) lvol: raport not supported io types on ro lvol (Tomasz Kulasek)
[12:26:11] Diff URL: https://github.com/spdk/spdk/compare/cbb8f4657564...7fb0f7467cf0
[12:26:11] *** Parts: travis-ci (~travis-ci@ec2-54-81-80-255.compute-1.amazonaws.com) ()
[13:06:02] KenneF - if you have the virtual address after the mmap, and you know that it maps to a physical region that is visible in /proc/pagemap
[13:06:06] then calling spdk_mem_register will just work
[13:06:31] then you can pass pointers to your mmap'd region to the spdk calls, instead of using a pointer from spdk_dma_malloc
[13:15:24] @bwalker thanks, I will give it a try!
[13:16:42] jimharris: this one needs a quick review (nvme multiprocess timeout stuff): https://review.gerrithub.io/#/c/408403/
[13:18:05] KenneF you can also do crazier things like get the virtual address of a PCI BAR, pass it to spdk_mem_register
[13:18:07] and then do DMAs to it
[13:19:30] i think that patch is fine - although i'm wondering if we should move the pid check out of the admin queue check
[13:19:53] and instead assert that if the pids don't match, it's the admin qpair
[13:20:10] *** Joins: param (~param@157.49.192.3)
[13:20:39] i added my +2
[13:20:59] bwalker : jim : drv : I have pushed the code. The pipeline has failed, I will check that. Can you please check if the changes are ok https://review.gerrithub.io/#/c/405489/
[13:23:07] *** Quits: param (~param@157.49.192.3) (Client Quit)
[13:23:41] *** Joins: param (~param@157.49.192.3)
[13:25:20] param: it looks like the patch has an unintended dpdk submodule change in it - please be sure to 'git submodule update' when you rebase
[13:25:36] also looks like the tests failed due to a crash in spdk_nvmf_rdma_poll_group_add()
[13:27:02] *** Joins: travis-ci (~travis-ci@ec2-54-80-146-36.compute-1.amazonaws.com)
[13:27:03] (spdk/master) nvme: Only check timeouts on requests from the same process (Ben Walker)
[13:27:03] Diff URL: https://github.com/spdk/spdk/compare/7fb0f7467cf0...ddeaeeec193b
[13:27:03] *** Parts: travis-ci (~travis-ci@ec2-54-80-146-36.compute-1.amazonaws.com) ()
[13:37:23] bwalker: can you take a look at peluse's env patch? https://review.gerrithub.io/#/c/408256/
[13:38:40] I'm not opposed necessarily, but why do we need this?
[13:38:46] for crypto stuff, apparently
[13:38:55] can't we implement anything that calls the bulk version with stuff that calls the regular version in a loop?
[13:39:12] well, I think the bulk get either atomically gets all the requested ones or none
[13:39:32] is that required for crypto?
[13:39:35] plus we already have the bulk put(), so this is at least consistent
[13:39:42] oh, we already have the put
[13:39:49] then we should definitely have the get
[13:45:08] *** Joins: travis-ci (~travis-ci@ec2-54-162-158-152.compute-1.amazonaws.com)
[13:45:09] (spdk/master) env: Add SPDK wrapper for rte_mempool_get_bulk() (Paul Luse)
[13:45:09] Diff URL: https://github.com/spdk/spdk/compare/ddeaeeec193b...2536874e85e7
[13:45:09] *** Parts: travis-ci (~travis-ci@ec2-54-162-158-152.compute-1.amazonaws.com) ()
[13:59:03] *** Quits: param (~param@157.49.192.3) (Quit: Going offline, see ya! (www.adiirc.com))
[14:05:47] jimharris: are you ok with merging the network ns patch for iSCSI? https://review.gerrithub.io/#/c/405641/
[14:06:08] you mentioned that you wanted to re-run it, and it looks like Tomek re-ran it a bunch
[14:07:21] and it failed twice
[14:07:27] in iSCSI-related tests
[14:08:00] hmm, good point
[14:08:08] I didn't notice that some of those had failed
[14:12:42] can you take a look at that bash snippet i just sent you via e-mail?
[14:13:39] *** Quits: KenneF (cf8c2b51@gateway/web/freenode/ip.207.140.43.81) (Ping timeout: 260 seconds)
[14:21:56] *** Joins: travis-ci (~travis-ci@ec2-54-80-146-36.compute-1.amazonaws.com)
[14:21:57] (spdk/master) lvol/doc: update lvol documentation (Maciej Szwed)
[14:21:57] Diff URL: https://github.com/spdk/spdk/compare/2536874e85e7...897bb3ac7f3f
[14:21:57] *** Parts: travis-ci (~travis-ci@ec2-54-80-146-36.compute-1.amazonaws.com) ()
[14:27:51] *** Joins: travis-ci (~travis-ci@ec2-54-80-146-36.compute-1.amazonaws.com)
[14:27:52] (spdk/master) test/rocksdb: move nightly test case to RUN_NIGHTLY_FAILING. (Pawel Niedzwiecki)
[14:27:52] Diff URL: https://github.com/spdk/spdk/compare/897bb3ac7f3f...cb2d8466dc2e
[14:27:52] *** Parts: travis-ci (~travis-ci@ec2-54-80-146-36.compute-1.amazonaws.com) ()
[14:52:12] *** Joins: travis-ci (~travis-ci@ec2-54-80-146-36.compute-1.amazonaws.com)
[14:52:13] (spdk/master) test/common: Fix flamegraph typo in vm_setup (Seth Howell)
[14:52:13] Diff URL: https://github.com/spdk/spdk/compare/cb2d8466dc2e...fc0dc65adc24
[14:52:13] *** Parts: travis-ci (~travis-ci@ec2-54-80-146-36.compute-1.amazonaws.com) ()
[15:29:23] *** Quits: lhodev (~lhodev@66-90-218-190.dyn.grandenetworks.net) (Quit: Textual IRC Client: www.textualapp.com)
[15:30:24] looks like GerritHub is doing some kind of DB migration
[16:46:04] hmm, the test pool has picked up a bunch of old reviews - did these get un-abandoned or something?
[16:46:21] or maybe they were drafts before?
[16:46:42] that's probably it - Gerrit 2.15 removed support for drafts
[16:55:36] https://groups.google.com/forum/#!topic/repo-discuss/ZMf4tZiGbvE
[17:00:24] well, now GerritHub seems to be totally down again
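Returning to the spdk_mempool_get_bulk() discussion above (13:37-13:45), a minimal sketch of why the all-or-nothing bulk get is preferable to calling spdk_mempool_get() in a loop: on failure nothing has been taken from the pool, so there is no partial set to hand back. The helper names here are illustrative, not part of SPDK.

    #include "spdk/env.h"

    /* Grab `count` elements atomically: either all of them or none.
     * A spdk_mempool_get() loop, by contrast, can fail part-way through
     * and leave the caller responsible for returning what it already took. */
    static int
    grab_buffers(struct spdk_mempool *pool, void **bufs, size_t count)
    {
        if (spdk_mempool_get_bulk(pool, bufs, count) != 0) {
            /* Nothing was removed from the pool - no cleanup needed. */
            return -1;
        }
        return 0;
    }

    static void
    release_buffers(struct spdk_mempool *pool, void **bufs, size_t count)
    {
        /* The matching bulk put that already existed in the env wrapper. */
        spdk_mempool_put_bulk(pool, bufs, count);
    }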