[02:01:11] *** Joins: gila (~gila@94-212-217-121.cable.dynamic.v4.ziggo.nl)
[02:51:22] *** Quits: pawelkax (~pawelkax@134.134.139.83) (Quit: ZNC 1.7.0 - https://znc.in)
[03:14:56] Project autotest-nightly build #476: STILL FAILING in 36 min. See https://dqtibwqq6s6ux.cloudfront.net for results.
[04:03:17] Project autotest-nightly build #477: STILL FAILING in 23 min. See https://dqtibwqq6s6ux.cloudfront.net for results.
[07:12:14] sethhowe: you're not going to like this: https://dqtibwqq6s6ux.cloudfront.net/results/autotest-per-patch/builds/29378/archive/nvmf_phy_autotest/build.log
[07:39:50] *** Joins: travis-ci (~travis-ci@ec2-23-22-252-77.compute-1.amazonaws.com)
[07:39:50] (spdk/master) nvme_manage: assert ns not being NULL when displaying namespaces (Tomasz Zawadzki)
[07:39:50] Diff URL: https://github.com/spdk/spdk/compare/5a484d4ab82f...8bab99fb6eaf
[07:39:50] *** Parts: travis-ci (~travis-ci@ec2-23-22-252-77.compute-1.amazonaws.com) ()
[07:49:10] darsto: My day just keeps getting better :)
[07:49:48] sethhowe: rather than reverting that whole patch - can you just push a patch that disables the overwhelm test for now?
[07:50:10] ^ oh yes that was my concern as well
[07:51:16] jimharris: will do. I know the change is sound. Although the last log Darek pointed to was in the shutdown tests, not the overwhelm tests.
[07:57:03] jimharris: https://review.gerrithub.io/c/spdk/spdk/+/451998
[08:27:48] thanks!
[08:28:02] do we have ASAN enabled/available on some systems but not others?
[08:28:12] I don't understand how the rocksdb patch passed
[08:46:05] *** Joins: markru1 (6047c201@gateway/web/freenode/ip.96.71.194.1)
[09:46:33] *** Joins: travis-ci (~travis-ci@ec2-3-89-182-214.compute-1.amazonaws.com)
[09:46:34] (spdk/master) test/nvmf: disable overwhelm testing for now. (Seth Howell)
[09:46:34] Diff URL: https://github.com/spdk/spdk/compare/8bab99fb6eaf...84c550f3cb7e
[09:46:34] *** Parts: travis-ci (~travis-ci@ec2-3-89-182-214.compute-1.amazonaws.com) ()
[10:09:41] klateck, was just looking at last night's nightly test failures... nvme_autotest failed with a java.io exception, so we don't have any logs beyond the console output. I looked at the history of nvme_autotest spawned from nightly and there's lots of red; looking at the same test spawned from per-patch, lots of green. Anything you can think of that might be different about these two subjobs?
[10:17:03] @jimharris shuhei mentioned #744 (and hopefully #725) are already merged in master, and also recommended that we consider upgrading to 19.04 from 19.01. We're very happy to try upgrading rather than requiring backports to the version we're on, but I don't see 19.04 on the github releases page yet. Is 19.04 still in progress, and if so is there a release candidate commit that you would recommend we start with now?
[10:52:46] *** Joins: emce (0fcbe230@gateway/web/freenode/ip.15.203.226.48)
[10:53:51] hi jrlusby - 19.04 is not released yet - it should be released early/mid next week though
[10:54:17] alright
[10:54:25] Hi all, currently looking into raid bdevs via spdkcli.py. Is there any current effort towards adding support?
[10:54:34] is the current master going to be part of the merge or is there a specific cutoff release candidate?
[10:55:00] 19.04 will be tagged directly from master
[10:55:03] the release is always whatever master is at the time of release
[10:58:47] *** Quits: emce (0fcbe230@gateway/web/freenode/ip.15.203.226.48) (Quit: Page closed)
[10:59:31] *** Joins: emce (~emce@15.203.226.48)
[11:08:51] *** Quits: markru1 (6047c201@gateway/web/freenode/ip.96.71.194.1) (Ping timeout: 256 seconds)
[11:13:21] jimharris: ASAN is only enabled on some systems, and only for some particular tests
[11:16:18] *** Quits: emce (~emce@15.203.226.48) (Ping timeout: 250 seconds)
[11:25:17] *** Joins: travis-ci (~travis-ci@ec2-3-90-78-29.compute-1.amazonaws.com)
[11:25:17] (spdk/master) Opal: Add revert tper cmd option (Chunyang Hui)
[11:25:17] Diff URL: https://github.com/spdk/spdk/compare/84c550f3cb7e...6b48e743a356
[11:25:17] *** Parts: travis-ci (~travis-ci@ec2-3-90-78-29.compute-1.amazonaws.com) ()
[11:25:55] *** Joins: emce (~emce@15.203.226.47)
[11:30:10] *** Quits: emce (~emce@15.203.226.47) (Ping timeout: 246 seconds)
[11:42:37] *** Joins: emce (~Mutter@173.239.198.47)
[11:48:33] *** Quits: emce (~Mutter@173.239.198.47) (Quit: Mutter: www.mutterirc.com)
[12:39:55] Question regarding the output of SPDK's identify when targeting an NVM subsystem (NVMe over Fabrics):
[12:40:18] In spite of having configured a couple of malloc bdevs, identify shows the number of namespaces as 0. Is this expected?
[12:41:00] did you do the rpc to explicitly add those bdevs as namespaces to the subsystem?
[12:41:17] spdk_nvmf_subsystem_add_ns or something like that
[12:41:39] I started up the target with a config file (instead of using RPCs).
[12:42:39] in the [Subsystem] sections do you have namespaces defined?
[12:44:07] In the [Subsystem1] section, I have MaxNamespaces set to 8, and then two namespaces: "Namespace Malloc0 1" and "Namespace Malloc1 2".
[12:44:50] and when you do a get_bdevs rpc, do you see Malloc0 and Malloc1?
[12:46:02] Yes
[12:46:33] are you able to connect to the target and otherwise do I/O to those namespaces?
[12:46:40] basically, is it just identify that's wrong?
[12:48:42] Yes, perf works fine sending I/O to them.
[12:49:01] interesting
[12:49:06] I first noticed this in our SPDK 18.10.2 build. I then tried it with a top-of-tree build and noticed the same thing.
[12:49:51] showing 0 namespaces is definitely not right, if perf finds them and works
[12:51:25] perf works whether or not I specify an 'ns:X' argument; i.e. it does the right thing, either directing I/O to just the specified namespace, or to all of them.
[12:52:18] yeah so perf is definitely finding them
[12:52:23] If I try an invalid namespace id, e.g. 'ns:3', then it fails with a message about that, as one would expect.
[12:52:29] which leads me to believe it might be an identify problem
[12:56:37] Shall I log an issue, or would you like to try this out on a setup of your own first?
[12:57:18] go ahead and log it
[12:58:06] Ok. I'll see if I can spend a little time looking at the code if possible. It's just that I'm so buried in a bazillion other things right now.
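[Editor's note: for readers trying to reproduce the identify discussion above, here is a minimal sketch of the kind of legacy INI-style target config being described. Only MaxNamespaces and the two Namespace lines come from the conversation; the NQN, listen address, serial number, and [Malloc] section are illustrative placeholders, and exact key names may vary between SPDK releases.]

    # hypothetical nvmf.conf fragment -- values other than MaxNamespaces and
    # the Namespace lines are placeholders, not taken from the log
    [Malloc]
      NumberOfLuns 2
      LunSizeInMB 64

    [Subsystem1]
      NQN nqn.2016-06.io.spdk:cnode1
      Listen RDMA 192.168.0.10:4420
      AllowAnyHost Yes
      SN SPDK00000000000001
      MaxNamespaces 8
      Namespace Malloc0 1
      Namespace Malloc1 2

    # querying the subsystem with the identify example app; the transport-ID
    # string uses the usual key:value form, but the address and NQN here are
    # again placeholders
    ./examples/nvme/identify/identify \
      -r 'trtype:RDMA adrfam:IPv4 traddr:192.168.0.10 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'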
[13:09:08] *** Joins: travis-ci (~travis-ci@ec2-52-5-34-211.compute-1.amazonaws.com)
[13:09:09] (spdk/master) test/rocksdb: move the location of DB_BENCH_DIR (Seth Howell)
[13:09:09] Diff URL: https://github.com/spdk/spdk/compare/6b48e743a356...df04be2e5366
[13:09:09] *** Parts: travis-ci (~travis-ci@ec2-52-5-34-211.compute-1.amazonaws.com) ()
[15:21:28] *** Joins: Shuhei (caf6fc61@gateway/web/freenode/ip.202.246.252.97)
[15:23:53] *** Quits: mszwed (mszwed@nat/intel/x-zopnhjslfimvotxp) (Remote host closed the connection)
[15:52:29] *** Joins: travis-ci (~travis-ci@ec2-23-22-252-77.compute-1.amazonaws.com)
[15:52:29] (spdk/master) blob: Don't reallocate cluster array if it didn't change size. (Ben Walker)
[15:52:29] Diff URL: https://github.com/spdk/spdk/compare/df04be2e5366...795134891e85
[15:52:29] *** Parts: travis-ci (~travis-ci@ec2-23-22-252-77.compute-1.amazonaws.com) ()
[15:57:14] *** Joins: travis-ci (~travis-ci@ec2-34-207-127-127.compute-1.amazonaws.com)
[15:57:14] (spdk/master) subsystem: check for NULL bufs in reservation ops. (Seth Howell)
[15:57:14] Diff URL: https://github.com/spdk/spdk/compare/795134891e85...3856d82b50f1
[15:57:14] *** Parts: travis-ci (~travis-ci@ec2-34-207-127-127.compute-1.amazonaws.com) ()
[16:00:36] *** Joins: mszwed_ (~mszwed@192.55.54.38)
[16:07:23] *** Quits: mszwed_ (~mszwed@192.55.54.38) (Remote host closed the connection)
[16:25:44] *** Joins: mszwed_ (mszwed@nat/intel/x-vfdjoycvragqiscr)
[16:54:39] *** Joins: emce (~emce@15.203.226.48)
[17:39:39] *** Quits: emce (~emce@15.203.226.48) (Ping timeout: 246 seconds)
[17:48:55] *** Quits: Shuhei (caf6fc61@gateway/web/freenode/ip.202.246.252.97) (Ping timeout: 256 seconds)
[17:57:59] *** Joins: Shuhei (caf6fc61@gateway/web/freenode/ip.202.246.252.97)
[20:16:15] *** Quits: Shuhei (caf6fc61@gateway/web/freenode/ip.202.246.252.97) (Ping timeout: 256 seconds)
[22:01:40] Project autotest-nightly build #478: STILL FAILING in 1 min 39 sec. See https://dqtibwqq6s6ux.cloudfront.net for results.
[22:25:59] Project autotest-nightly-failing build #340: STILL FAILING in 25 min. See https://dqtibwqq6s6ux.cloudfront.net for results.
[23:51:31] *** Joins: pniedzwx_ (~pniedzwx_@host-185-93-94-178.ip-point.pl)
[23:56:25] *** Quits: pniedzwx_ (~pniedzwx_@host-185-93-94-178.ip-point.pl) (Ping timeout: 258 seconds)
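[Editor's note: on the ASAN question from earlier in the log -- address sanitizer is a build-time option in SPDK, so whether it is active depends on how each test system ran configure. A minimal sketch, assuming a standard source checkout; --enable-asan is the flag SPDK's configure script exposes, while the per-system CI job settings discussed above are not shown here.]

    # build the tree with address sanitizer instrumentation enabled;
    # systems that configure without this flag run the same tests un-instrumented
    ./configure --enable-asan --enable-debug
    make -j$(nproc)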