[01:10:14] *** Joins: tomzawadzki (uid327004@gateway/web/irccloud.com/x-hdsgueljuojnqtba)
[02:21:37] *** Joins: gila (~gila@5ED4D979.cm-7-5d.dynamic.ziggo.nl)
[02:41:50] *** Joins: felipef (~felipef@62.254.189.133)
[02:42:07] *** Quits: felipef (~felipef@62.254.189.133) (Remote host closed the connection)
[02:43:30] *** Joins: felipef (~felipef@62.254.189.133)
[03:12:05] Project irc-test build #2: SUCCESS in 0.14 sec: https://10.102.17.104:8080/job/irc-test/2/
[03:28:23] *** Joins: thanosm (~thanosm@62.254.189.133)
[04:58:38] *** Quits: felipef (~felipef@62.254.189.133) (Remote host closed the connection)
[05:15:31] Project irc-test build #3: SUCCESS in 0.6 sec. See https://ci.spdk.io/spdk-jenkins for results.
[05:36:08] Project irc-test build #4: SUCCESS in 0.1 sec. See https://ci.spdk.io/spdk-jenkins for results.
[05:42:47] *** Quits: thanosm (~thanosm@62.254.189.133) (Ping timeout: 255 seconds)
[06:00:08] *** Joins: thanosm (~thanosm@62.254.189.133)
[06:11:50] *** Joins: felipef (~felipef@62.254.189.133)
[06:16:05] *** Quits: felipef (~felipef@62.254.189.133) (Ping timeout: 255 seconds)
[07:06:56] mszwed, thanks. I'll just rebuild my system, I've wasted too much time on it already :) Good to know it should work with those versions...
[07:07:44] klateck, regarding darsto's comment above, didn't you mention you fixed something related to ccache?
[07:08:16] peluse: just in case - I'm using 18.04 which was upgraded from 16.04 (not a clean installation)
[07:11:24] OK, thanks
[07:11:51] peluse, I just cleaned the cache and it seems that was only a temporary fix. The cache size was fine for a long time (since Jenkins started), but recently it takes up way more space
[07:12:08] Gotta set a limit or something so it will auto-clean
[07:15:14] klateck, K, thanks
[07:15:26] please keep us posted, as it seems to be happening at least once per week
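One way to get the auto-clean behaviour mentioned above is to cap the cache size so ccache evicts old objects on its own. A minimal sketch, assuming plain ccache on the Jenkins workers; the 10G limit is a placeholder, not the value actually chosen:

    ccache -M 10G   # set the maximum cache size; ccache trims old objects once the limit is reached
    ccache -s       # show the configured limit plus current size and hit/miss statistics
    ccache -C       # clear the whole cache manually, if a one-off cleanup is needed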
[07:49:32] *** Joins: lhodev (~lhodev@inet-hqmc05-o.oracle.com)
[07:51:33] *** Quits: lhodev (~lhodev@inet-hqmc05-o.oracle.com) (Remote host closed the connection)
[07:52:04] *** Joins: lhodev (~lhodev@inet-hqmc05-o.oracle.com)
[07:54:07] *** Quits: lhodev (~lhodev@inet-hqmc05-o.oracle.com) (Remote host closed the connection)
[07:54:37] *** Joins: lhodev (~lhodev@inet-hqmc05-o.oracle.com)
[07:56:40] *** Quits: lhodev (~lhodev@inet-hqmc05-o.oracle.com) (Remote host closed the connection)
[07:57:11] *** Joins: lhodev (~lhodev@inet-hqmc05-o.oracle.com)
[07:59:12] *** Quits: lhodev (~lhodev@inet-hqmc05-o.oracle.com) (Remote host closed the connection)
[07:59:42] *** Joins: lhodev (~lhodev@inet-hqmc05-o.oracle.com)
[08:01:45] *** Quits: lhodev (~lhodev@inet-hqmc05-o.oracle.com) (Remote host closed the connection)
[08:02:16] *** Joins: lhodev (~lhodev@inet-hqmc05-o.oracle.com)
[08:04:18] *** Quits: lhodev (~lhodev@inet-hqmc05-o.oracle.com) (Remote host closed the connection)
[08:04:49] *** Joins: lhodev (~lhodev@inet-hqmc05-o.oracle.com)
[08:06:51] *** Quits: lhodev (~lhodev@inet-hqmc05-o.oracle.com) (Remote host closed the connection)
[08:07:22] *** Joins: lhodev (~lhodev@inet-hqmc05-o.oracle.com)
[08:09:24] *** Quits: lhodev (~lhodev@inet-hqmc05-o.oracle.com) (Remote host closed the connection)
[08:09:55] *** Joins: lhodev (~lhodev@inet-hqmc05-o.oracle.com)
[08:11:58] *** Quits: lhodev (~lhodev@inet-hqmc05-o.oracle.com) (Remote host closed the connection)
[08:12:28] *** Joins: lhodev (~lhodev@inet-hqmc05-o.oracle.com)
[08:14:31] *** Quits: lhodev (~lhodev@inet-hqmc05-o.oracle.com) (Remote host closed the connection)
[08:15:03] *** Joins: lhodev (~lhodev@inet-hqmc05-o.oracle.com)
[08:17:04] *** Quits: lhodev (~lhodev@inet-hqmc05-o.oracle.com) (Remote host closed the connection)
[08:17:35] *** Joins: lhodev (~lhodev@inet-hqmc05-o.oracle.com)
[08:19:37] *** Quits: lhodev (~lhodev@inet-hqmc05-o.oracle.com) (Remote host closed the connection)
[08:20:07] *** Joins: lhodev (~lhodev@inet-hqmc05-o.oracle.com)
[08:22:10] *** Quits: lhodev (~lhodev@inet-hqmc05-o.oracle.com) (Remote host closed the connection)
[08:22:40] *** Joins: lhodev (~lhodev@inet-hqmc05-o.oracle.com)
[10:20:22] *** Quits: thanosm (~thanosm@62.254.189.133) (Read error: Connection reset by peer)
[11:29:57] *** Quits: tomzawadzki (uid327004@gateway/web/irccloud.com/x-hdsgueljuojnqtba) (Quit: Connection closed for inactivity)
[12:59:16] should I expect spdk to do faster seq writes than native nvme-cli?
[12:59:46] I do not see the SPDK driver outperform nvme cli in my fio runs
[12:59:53] I am using fio-3.3
[12:59:57] latest spdk
[13:00:28] also....what advantage is there to using perf? Is perf better than fio?
[13:00:55] someone please explain, as I am running these tests to gather metrics and determine if we can switch to SPDK
[13:04:51] perf is much faster than fio, yes
[13:05:12] you should expect to see better performance with SPDK if the block stack is your bottleneck
[13:05:33] if the drive itself is already saturated (which can happen with sequential writes pretty easily), then you won't get any improvement
[13:05:37] the drive can only go so fast
[14:20:49] thanks @bwalker
[14:21:45] but I am seeing better performance with the native nvme cli driver even on random reads and writes...maybe I should switch to perf?
[14:22:08] *** Joins: felipef (~felipef@cpc92310-cmbg19-2-0-cust421.5-4.cable.virginm.net)
[14:55:09] can you outline what the nvme cli driver is?
[14:55:25] *** Quits: felipef (~felipef@cpc92310-cmbg19-2-0-cust421.5-4.cable.virginm.net) (Remote host closed the connection)
[14:55:42] do you mean fio using the libaio engine to a device bound to the kernel nvme driver?
[15:16:45] yes
[15:17:12] libaio is what I use with the device bound to the kernel nvme driver
[15:18:18] *** Joins: felipef (~felipef@cpc92310-cmbg19-2-0-cust421.5-4.cable.virginm.net)
[15:18:37] *** Quits: felipef (~felipef@cpc92310-cmbg19-2-0-cust421.5-4.cable.virginm.net) (Remote host closed the connection)
[15:18:51] *** Joins: Shuhei_ (caf6fc61@gateway/web/freenode/ip.202.246.252.97)
[15:19:30] *** Quits: gila (~gila@5ED4D979.cm-7-5d.dynamic.ziggo.nl) (Quit: My Mac Pro has gone to sleep. ZZZzzz…)
[16:10:49] and what sort of total performance are you seeing?
[16:11:11] there is no scenario where the kernel driver should yield better performance than spdk, if using a raw block device
[16:55:35] let me paste some numbers
[16:56:28] also...by raw block device I am assuming you mean after an nvme format
[16:58:25] I am using the nvme_manage utility to do the nvme format
[17:25:40] write: IOPS=17.3k, BW=2166MiB/s (2272MB/s)(20.0GiB/9454msec)
[17:25:54] nvme cli
[17:26:11] write: IOPS=17.2k, BW=2152MiB/s (2257MB/s)(20.0GiB/9516msec)
[17:26:15] spdk
[17:27:51] that is the data from 128K seq writes (20GB) after nvme format, aka FOB
[17:41:42] I am using the fio_plugin as my io engine when using SPDK (assuming I cannot use the libaio engine with spdk): ioengine=/Desktop/Git/spdk/examples/nvme/fio_plugin/fio_plugin
[17:45:07] any ideas?
[17:46:31] to set up spdk I ran > sudo HUGEMEM=8192 /scripts/setup.sh
[17:56:35] Does this appear to be the latest, or would this version yield worse performance?
[17:56:37] Starting SPDK v19.04-pre / DPDK 18.11.0 initialization... [ DPDK EAL parameters: fio --no-shconf -c 0x1 -m 0 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid5503 ] EAL: Probing VFIO support...
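For context on the comparison above, a rough sketch of how the two setups being discussed are typically invoked, plus SPDK's own perf tool. The device name, PCIe address, paths, and sizes are placeholders and are not taken from this conversation:

    # Kernel path: fio with the libaio engine against the block device left
    # bound to the kernel nvme driver (placeholder device name).
    fio --name=seqwrite --ioengine=libaio --direct=1 --rw=write --bs=128k \
        --iodepth=128 --size=20g --filename=/dev/nvme0n1

    # SPDK path: the same job through the SPDK fio_plugin. Run as root after
    # the device has been unbound via scripts/setup.sh; the plugin requires
    # fio's thread mode, and fio expects '.' instead of ':' in the PCIe address.
    LD_PRELOAD=./examples/nvme/fio_plugin/fio_plugin fio --name=seqwrite \
        --ioengine=spdk --thread --direct=1 --rw=write --bs=128k \
        --iodepth=128 --size=20g \
        --filename='trtype=PCIe traddr=0000.04.00.0 ns=1'

    # SPDK's perf tool skips fio's overhead entirely:
    # -q queue depth, -o I/O size in bytes, -w workload, -t run time in seconds.
    sudo ./examples/nvme/perf/perf -q 128 -o 131072 -w write -t 60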
[18:02:25] If we use spdk_crc16_t10dif_copy and src and dst overlap, DIF insertion destroys data.
[18:03:02] If spdk_crc16_t10dif_copy copied not in forward but in reverse order, no issue would occur.
[18:03:26] I want to ask if ISA-L's API can be used when src and dst overlap.
[18:04:51] I will test whether crc16_t10dif_copy works, but if it doesn't, I may ask the ISA-L team through GitHub.
[18:12:36] *** Quits: lhodev (~lhodev@inet-hqmc05-o.oracle.com) (Remote host closed the connection)
[18:12:59] @bwalker - can you help answer my questions?
[18:13:13] *** Joins: lhodev (~lhodev@66-90-218-190.dyn.grandenetworks.net)
[18:17:16] I posted an issue to ISA-L: https://github.com/01org/isa-l/issues/54
[18:19:01] *** Joins: travis-ci (~travis-ci@ec2-54-162-96-90.compute-1.amazonaws.com)
[18:19:02] (spdk/master) nbd: correct notes of spdk_nbd_start API (Xiaodong Liu)
[18:19:02] Diff URL: https://github.com/spdk/spdk/compare/954728e9df0e...de976cf33180
[18:19:02] *** Parts: travis-ci (~travis-ci@ec2-54-162-96-90.compute-1.amazonaws.com) ()
[18:21:37] *** Quits: lhodev (~lhodev@66-90-218-190.dyn.grandenetworks.net) (Ping timeout: 244 seconds)
[18:30:41] *** Joins: zhouhui (~wzh@114.255.44.140)
[18:53:25] Shuhei_: I just left a comment in https://review.gerrithub.io/c/spdk/spdk/+/444831. It seems that you made a typo in the GitHub issue.
[18:54:34] Yes, I saw that and fixed it. Thank you!
[18:56:09] I will use DIF + memcpy for now and wait for feedback from the ISA-L team, as I wrote in the GerritHub review.
[19:11:55] *** Quits: zhouhui (~wzh@114.255.44.140) (Quit: WeeChat 1.9.1)
[20:04:03] *** Quits: Shuhei_ (caf6fc61@gateway/web/freenode/ip.202.246.252.97) (Ping timeout: 256 seconds)
[20:05:02] nate12112_spdk: that is very likely the maximum bandwidth of your device, and so which driver you use isn't going to matter
[21:23:32] *** Joins: lhodev (~lhodev@66-90-218-190.dyn.grandenetworks.net)
[23:01:04] Project autotest-nightly build #404: STILL FAILING in 1 min 2 sec. See https://ci.spdk.io/spdk-jenkins for results.
[23:23:10] Project autotest-nightly-failing build #277: STILL FAILING in 23 min. See https://ci.spdk.io/spdk-jenkins for results.
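On the spdk_crc16_t10dif_copy overlap question above, a minimal sketch of the interim "DIF + memcpy" approach: compute the CRC before the data is moved, then copy with memmove(), which is defined for overlapping buffers. It assumes the three-argument spdk_crc16_t10dif(init_crc, buf, len) prototype from include/spdk/crc16.h, the helper name is made up for the example, and it only illustrates the workaround, not the fix that was eventually merged:

    #include <stdint.h>
    #include <stddef.h>
    #include <string.h>

    #include "spdk/crc16.h"

    /* Overlap-safe variant of copy+CRC: CRC the source bytes while they are
     * still intact, then move them with memmove(), which, unlike a plain
     * forward copy, handles overlapping src/dst ranges. */
    static uint16_t
    crc16_t10dif_copy_overlap_safe(uint16_t init_crc, uint8_t *dst,
                                   const uint8_t *src, size_t len)
    {
            uint16_t crc = spdk_crc16_t10dif(init_crc, src, len);

            memmove(dst, src, len);

            return crc;
    }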