[00:28:56] *** Joins: alekseymmm (050811aa@gateway/web/freenode/ip.5.8.17.170)
[00:59:33] *** Joins: tkulasek (~tkulasek@192.55.54.42)
[01:47:22] *** Joins: chenzhua (cdfcd96e@gateway/web/freenode/ip.205.252.217.110)
[01:50:27] *** Quits: chenzhua (cdfcd96e@gateway/web/freenode/ip.205.252.217.110) (Client Quit)
[01:57:06] i broke scan-build https://ci.spdk.io/spdk-jenkins/results/autotest-per-patch/builds/8309/archive/unittest_autotests/scan-build/report-a81577.html#Path7
[02:41:56] *** Joins: tomzawadzki (~tomzawadz@192.55.54.38)
[03:03:45] *** Quits: tkulasek (~tkulasek@192.55.54.42) (Remote host closed the connection)
[03:38:25] *** Quits: dlw (~Thunderbi@114.255.44.143) (Ping timeout: 248 seconds)
[05:08:59] *** Quits: gila (~gila@5ED74129.cm-7-8b.dynamic.ziggo.nl) (Ping timeout: 260 seconds)
[05:10:55] *** Joins: gila (~gila@static.214.50.9.5.clients.your-server.de)
[05:10:56] Hello everyone. Small question: is it possible to run fio with the SPDK fio_plugin to test some bdev and enable logs like you do with the -L flag?
[05:12:30] debug logs I mean, from SPDK_DEBUGLOG
[05:34:03] *** Joins: lyan (~lyan@2605:a000:160e:2dd:4a4d:7eff:fef2:eea3)
[05:34:27] *** lyan is now known as Guest87808
[05:53:11] It seems no one is here
[06:04:32] alekseymmm, I'm sure someone who has used the plug-in will be along shortly. It's still pretty early in the US and starting to get late in the other geos...
[06:18:47] alekseymmm: right now you can only enable them at compile time - https://review.gerrithub.io/c/spdk/spdk/+/408477
[06:58:31] where does it log? If I use SPDK_DEBUGLOG in some module, will it print to stdout or to some file?
[07:00:54] *** Joins: aleksymm (050811aa@gateway/web/freenode/ip.5.8.17.170)
[07:01:08] *** Quits: aleksymm (050811aa@gateway/web/freenode/ip.5.8.17.170) (Client Quit)
[07:02:07] *** Joins: alekseymmm1 (d9429a37@gateway/web/freenode/ip.217.66.154.55)
[07:07:35] stderr
[07:08:54] So I have to add those lines in the fio plugin and use SPDK_DEBUGLOG, and configure with --enable-debug?
[07:08:59] yup
[07:09:19] Thx a lot. I will try it soon
[07:25:42] The RPC for creating lvol_bdevs says that it returns the name of the lvol. However, vbdev_lvol.c sets the name of the lvol_bdev to the UUID. Now I wonder where the bug is: the RPC docs/comment or vbdev_lvol.c. Personally, I would like to be able to define the names myself.
[07:40:24] *** Quits: alekseymmm1 (d9429a37@gateway/web/freenode/ip.217.66.154.55) (Ping timeout: 252 seconds)
[07:49:28] gila: maybe I will surprise you, but the UUID becomes the lvol bdev name :D
[07:50:36] Yes I know, that's my point :)
[07:50:52] yes, it is not what most of us expect - I was surprised too when writing the RPC documentation. Why the UUID is the bdev name is a question for jimharris or tomzawadzki
[07:51:20] the name you can also use is the alias "LVS_NAME/LVOL_NAME"
[07:52:57] Yes, however, a consequence is that you can't get the bdev by the name you created it with, and you have to figure out the generated UUID first
[07:54:02] yup, 100% true.
[08:01:15] @pwodkowx the bdev alias is optional, while the bdev name is required. Since it is less likely to encounter a name conflict using a UUID, that is the one used for the bdev name field. The bdev alias is the one that the user assigned.
[08:02:29] @tomzawadzki the bdev alias is not the bdev name, rather lvol_store/name ..
[08:03:35] as @pwodkowx mentioned
[08:05:52] But still, to me, putting the name as the bdev_name makes a lot more sense than fabricating a UUID and having no control over the naming of the bdev.
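For context on the scheme being debated here, the sketch below shows how an lvol-style bdev can end up with the UUID string as its required bdev name while the human-readable "lvs_name/lvol_name" is attached as an alias. This is only an illustration, not the actual vbdev_lvol.c code: register_lvol_bdev_sketch() is a hypothetical helper, while spdk_bdev_register(), spdk_bdev_alias_add(), spdk_uuid_fmt_lower() and spdk_sprintf_alloc() are the public SPDK APIs the discussion refers to.

#include <errno.h>
#include <stdlib.h>
#include <string.h>
#include "spdk/bdev_module.h"
#include "spdk/string.h"
#include "spdk/uuid.h"

/* Hypothetical sketch of the naming scheme discussed above - not the real
 * vbdev_lvol.c implementation. The UUID string becomes the unique bdev name;
 * the user-visible "lvs_name/lvol_name" is added as an optional alias. */
static int
register_lvol_bdev_sketch(struct spdk_bdev *bdev, const struct spdk_uuid *uuid,
			  const char *lvs_name, const char *lvol_name)
{
	char uuid_str[SPDK_UUID_STRING_LEN];
	char *alias;
	int rc;

	/* Required, globally unique bdev name: the generated UUID. */
	spdk_uuid_fmt_lower(uuid_str, sizeof(uuid_str), uuid);
	bdev->name = strdup(uuid_str);
	if (bdev->name == NULL) {
		return -ENOMEM;
	}

	rc = spdk_bdev_register(bdev);
	if (rc != 0) {
		free(bdev->name);
		return rc;
	}

	/* Optional alias: the name RPCs like get_bdevs can also match on. */
	alias = spdk_sprintf_alloc("%s/%s", lvs_name, lvol_name);
	if (alias == NULL) {
		return -ENOMEM;
	}
	rc = spdk_bdev_alias_add(bdev, alias);
	free(alias);
	return rc;
}

Under this scheme, renaming an lvol only has to update the alias; the UUID name (and anything holding a reference to it) stays valid, which is the "symlink" behaviour mentioned later in the discussion.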
Added to that, get_bdevs does not support searching by aliases. It kinda feels like creating a file "foo" which then gets created with a UUID, and I have to scan the xattrs to find "ah yes, that's my foo file!"
[08:06:35] get_bdevs does look through both name and aliases
[08:07:42] the lvs_name/lvol_name ?
[08:07:59] because for me it did not work by just using lvol_name..
[08:08:44] Any call should work with the UUID (bdev name) and LVS_NAME/LVOL_NAME (bdev alias)
[08:09:18] right -- but not with lvol_name
[08:12:37] gila: this is to handle the case of two lvols with the same name in different lvolstores
[08:13:42] bdev names must be unique, and we also support lvol renaming - so using the uuid as the 'official' bdev name, and lvs_name/lvol_name as a bdev alias, enables us to easily do both
[08:14:58] I see.
[08:15:36] but maybe there are improvements to our RPCs that are needed?
[08:15:49] It feels a little inconsistent compared to how, for example, AIO bdevs are handled. It will simply fail the RPC.
[08:16:18] Also, it requires me to know not just the lvol_name but also the lvol_store
[08:16:33] but don't you need to specify the name of the lvol store when creating the lvol?
[08:17:44] Yes, that is true, but thinking ahead a bit: if I have, let's say, 100 lvol stores and 1000 lvol_bdevs scattered over them, it becomes less intuitive to find the thing you are looking for
[08:19:00] Besides that, all RPC calls return a string (or multiple strings) that can be used in future calls. In the case of lvol that's the UUID, in AIO the name as provided, in NVMe the list of NVMe namespaces exposed as bdevs. In this way it is consistent.
[08:20:05] gila - are you guaranteeing yourself that the 1000 lvol_bdev names are unique? or are you relying on spdk/lvolstore to do it?
[08:22:03] I'd argue that it would be up to the admin/user to handle that, but I would expect SPDK to prevent me from creating duplicate names. If the admin/user wants to use a UUID gen lib to create names, then it's up to him. But that's just my personal view, not saying one is better than the other..
[08:24:37] one of the problems is that SPDK can only guarantee uniqueness within one system - i.e. the user creates "lvol0" on system A and on system B, then the SSD from system B gets moved to system A
[08:24:56] now system A has two lvols named "lvol0" - that's why we prepend with the lvolstore name
[08:25:32] good point, did not consider that
[08:26:46] however, other systems have solved that; for example ZFS handles this gracefully by allowing you to import something by a new name. It does this by temporarily overriding the name during import
[08:27:17] on a per-file basis?
[08:27:37] pool (analogous to lvol_store)
[08:27:38] incidentally - the scheme SPDK is using is very similar to LVM on Linux
[08:28:06] right - so lvol_stores can also be renamed - we don't allow loading an lvol_store if there is already an lvol_store with that name loaded
[08:28:38] Yeah, because the lvol_store "import" is implicit
[08:28:55] if there was an import_lvol, that trick could be applied ..
[08:29:35] that gets really messy though - it adds dependencies between lvolstores
[08:29:59] if you have lvol stores on top of lvol stores, you mean?
[08:30:48] no - i mean if you want to load "lvol0", you have to check across all of the loaded lvolstores to see if an "lvol0" already exists - with the current scheme, each lvolstore is independent
[08:32:17] in your setup, with 100 lvolstores and 1000 logical volumes across those lvolstores...
[08:32:47] Yes - you would need to have a namespace of some sort that tracks all the lvol stores
[08:34:07] ...can you describe to me where you get the name(s) of an lvol that you need to search for?
[08:35:19] i'd like to understand more why it's not possible to specify the name of the lvol by lvs_name/lvol_name
[08:36:03] or even by uuid - presumably when the lvol is created, all of this information could be saved into a management/admin database (not just the lvol_name)
[08:36:30] gimme a sec @jimharris - on a call.
[08:38:50] why can't the lvol_bdev name be volatile?
[08:40:13] pwodkowx - can you explain?
[08:40:41] like in nvme -> we generate this name during probe but it can change
[08:41:19] this way the UUID or alias - lvs_name/lvol_name - would be persistent, and if the user provides their own name it would be used instead.
[08:41:36] lvs_name/lvol_name is the name that the user provided
[08:41:50] uuid is the only one generated by spdk
[08:42:17] lvs_name/lvol_name is persisted as well in blobstore metadata
[08:43:45] yes, but lvol_name should be the bdev_name by default. If there is a conflict, use the UUID as the bdev_name. This way the user has a chance to rename/resolve this conflict.
[08:45:03] we are talking from the point of view of developers who know SPDK and how it works, but as you see, users are confused...
[08:46:01] i disagree - once you figure out there's a conflict it's way too late
[08:46:18] and it doesn't account for the case where lvolstores are moved between systems
[08:49:09] i'm open to suggestions on rpc enhancements to make some of this easier - but i'm still struggling to understand why specifying by lvs_name/lvol_name is not possible
[08:50:12] imagine the case where you move an lvolstore from system A to system B - that lvolstore has 20 lvols in it; 10 of them have names that conflict with lvols already on system B, the other 10 do not
[08:50:36] in that case you have to manually rename all 10 of those lvols
[08:51:07] but with the existing scheme - worst case, the lvolstore itself has the same name and has to be renamed
[08:52:34] agree about this, but:
[08:52:57] 1. maybe the documentation is not verbose enough about this idea
[08:53:33] you mean how it works, or why it works that way (or both?)
[08:53:34] 2. the lvol_bdev name should work like a symlink, as in /dev/dev_name -> /dev/mapper/some_other_name
[08:53:40] both
[08:53:58] it does work that way - the lvs/lvol name is an alias to the uuid
[08:54:29] about 2. so we can pick just any volatile name we want
[08:54:29] this makes it super easy to rename an lvol while it is in use (you don't have to unregister and register the underlying bdev)
[08:54:39] @jimharris it is certainly possible to supply the name as lvol_store/lvol_name -- but as things are API driven these days, a user (who knows nothing about storage, does not care about storage, and does not want to know anything about storage) simply needs to create a volume, typically giving it a name. Let's say "mongodb1". When he later wants to get various info about his volume, it's not odd for him to say something like, get
[08:54:40] the stats of "mongodb1". Or "snap mongodb1". With the current scheme he would not find it, as he would need to know the lvol_store it was created on, or needs to know the UUID - something he then should write down somewhere. Assuming the user has no access to the whole storage instance, he would be stuck if he lost the UUID.
[08:56:03] gila - doesn't this user presumably have to open a pool of some kind before creating the "mongodb1" volume?
[08:56:53] Good morning guys.
Any chance we can merge in https://review.gerrithub.io/#/c/spdk/spdk/+/420575/ today?
[08:57:12] That depends .. he could simply be given an API endpoint he can create lvols on, and have no further access to the storage system itself.
[08:58:13] so whatever is giving the user that API endpoint could do the lvs_name stuff
[08:58:46] yes, for sure, there can be layers put on top that abstract things for the user
[08:59:23] There would have to be, since the user can't create the volume without knowing the volume store to create it on.
[08:59:27] what spdk is really trying to do here is provide the core building blocks to do logical volumes - the expectation (at least my expectation) is that there are other management/configuration layers sitting on top of it, and users are not directly sending RPCs to an SPDK lvolstore
[09:00:06] And my original question was more about the fact that the RPC method says it returns a name, while it doesn't, so I was wondering which was wrong.
[09:00:19] @jimharris Yes -- that's fine :)
[09:02:50] pwodkowx - regarding "pick just any volatile name we want" - pick this name when the lvol is created or when it is loaded?
[09:04:04] loaded
[09:04:57] e.g. lvol1, lvol2, lvol3 ... or if the user wants to name it "unicorn", pick "unicorn" as the bdev_name
[09:05:02] how are those volatile names specified? when an nvme namespace is attached, and it contains an lvolstore with 100 lvols - when are these 100 names specified?
[09:09:44] just like in '/dev/', those links are created automatically. If getting a bdev by UUID or lvs_name/lvol_name is working, more advanced users will use that; for others, having lvolX will be more than enough to access the lvol
[09:10:14] *** Joins: peter_turschm (~peter_tur@66.193.132.66)
[09:12:14] someone has to specify those lvolX names though - when does that happen?
[09:12:35] assume we have an nvme namespace with 100 lvols
[09:12:39] start application
[09:12:41] attach this namespace
[09:13:02] uuids and lvs_name/lvol_names are all automatically picked up from metadata
[09:13:13] but how are these other lvolX names specified?
[09:16:01] spdk_asnprintf("lvol%u", g_lvol_cnt++)?
[09:19:44] i'm not seeing how that is helpful to a user - it's completely arbitrary and susceptible to complete and total reordering between application restarts
[09:20:08] nvme namespaces could be attached out of order, adding new lvols could change the ordering, etc.
[09:20:45] and i thought you said you wanted the user to be able to name it "unicorn" :)
[09:26:09] yup https://github.com/spdk/spdk/blob/master/lib/bdev/malloc/bdev_malloc.c@414
[09:26:45] 404 on that link
[09:27:15] LOL, gerrithub and github could use the same permalink styles :P
[09:27:16] https://github.com/spdk/spdk/blob/18f534465ca5ae9318b09f1a752f071df461d6ca/lib/bdev/malloc/bdev_malloc.c#L414
[09:28:04] but malloc isn't a real bdev :) and i think it's safe to say that's not the model we think is best
[09:28:21] for example, nvme used to work like this, but now we require the user to specify the base bdev name
[09:28:52] i would love to fix malloc to not allow this autogeneration
[09:29:29] And I love this autogeneration when typing debug scripts :D
[09:31:28] Didn't this start with specifying names, rather than "autogeneration"?
[09:32:15] I think for arbitrary autogenerated names, a UUID does just fine.
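To make the trade-off in the last few messages concrete, here is a small sketch (not code from the malloc or lvol bdevs; both helpers are hypothetical) contrasting a counter-based autogenerated name, which follows attach order and resets on every restart, with a UUID-based default name built on the public spdk_uuid_generate()/spdk_uuid_fmt_lower()/spdk_sprintf_alloc() APIs.

#include "spdk/stdinc.h"
#include "spdk/string.h"
#include "spdk/uuid.h"

/* Counter-based autogeneration ("lvol0", "lvol1", ...): convenient to type in
 * debug scripts, but the counter restarts at 0 on every application start and
 * the numbering follows attach order, so names can reshuffle between runs. */
static uint32_t g_bdev_cnt;

static char *
make_counter_bdev_name(const char *prefix)
{
	return spdk_sprintf_alloc("%s%u", prefix, g_bdev_cnt++);
}

/* UUID-based default name: arbitrary, but generated once at create time (or
 * read back from on-disk metadata), so it stays stable across restarts and
 * across namespaces being attached out of order. */
static char *
make_uuid_bdev_name(void)
{
	struct spdk_uuid uuid;
	char uuid_str[SPDK_UUID_STRING_LEN];

	spdk_uuid_generate(&uuid);
	spdk_uuid_fmt_lower(uuid_str, sizeof(uuid_str), &uuid);
	return spdk_sprintf_alloc("%s", uuid_str);
}

Whichever scheme produces the default name, the human-friendly handle ("unicorn", "lvs0/lvol0") would still live in the alias list rather than in the name field.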
[09:36:14] about https://review.gerrithub.io/#/c/spdk/spdk/+/413828/
[09:36:51] I have reworked start_nbd_disk() but it is quite a massive change
[09:37:16] I would rather not merge this before the release, I will remove #spdk-18.07
[09:37:35] i agree with your suggestion pwodkowx
[09:37:42] i was just going to ask you about that one actually
[09:38:05] I've slowly been removing autogenerated names - the only one left is malloc I think
[09:38:39] can always implement autogenerated names at a higher level - like in a utility function in a test script
[09:39:26] I'm sort of tempted to make all bdev names a UUID that we generate (or read from the device where possible)
[09:39:36] and then only the aliases are the user defined names
[09:39:41] basically symlinks
[09:40:03] we may need some better RPC infrastructure to operate on those symlinks
[09:41:45] fix for dpdk 18.05 intermittent failures - https://review.gerrithub.io/c/spdk/spdk/+/420659
[09:43:25] it's really nice if the "name" - the thing we use to determine uniqueness - is immutable
[09:43:35] but we want to allow the users to change the name they refer to it by for convenience
[09:44:51] bwalker: 2 patches here that need review for the spdk-18.07 release: https://review.gerrithub.io/#/c/spdk/spdk/+/418679/
[09:52:10] darsto: this one needs a rebase https://review.gerrithub.io/#/c/spdk/spdk/+/420573/
[09:53:13] it looks to me like there's one iscsi initiator bug fix patch, one additional test for blobstore, and the crypto bdev left for 18.07
[09:56:25] *** Joins: travis-ci (~travis-ci@ec2-54-167-199-126.compute-1.amazonaws.com)
[09:56:26] (spdk/master) test/iscsi: replace config files from iscsi tests with json and rpc calls (Pawel Niedzwiecki)
[09:56:26] Diff URL: https://github.com/spdk/spdk/compare/18f534465ca5...9668121208f7
[09:56:26] *** Parts: travis-ci (~travis-ci@ec2-54-167-199-126.compute-1.amazonaws.com) ()
[10:44:38] darsto - are you still there?
[10:44:52] have a question for you on one of the iscsi initiator patches
[10:50:46] jimharris: ask away
[10:53:04] https://review.gerrithub.io/#/c/spdk/spdk/+/420571/3/lib/bdev/iscsi/bdev_iscsi.c
[10:53:16] in the bdev_iscsi_finish path - do we still want to call the ->create_cb?
[10:55:11] practically we don't have to right now
[10:55:47] but we might refactor the bdev_iscsi rpc later on to allocate some context
[10:56:01] like we do with many other rpcs
[10:56:36] then, if we don't call the ->create_cb, we leak that context
[10:57:16] i guess it's fine - i'm just trying to figure out how bdev_iscsi_finish could get called with one of these requests outstanding?
[10:57:23] but i think the patch looks fine as is
[11:00:35] +2
[11:15:39] *** Joins: tzawadzki (~tomzawadz@192.55.54.42)
[11:16:02] *** Quits: tomzawadzki (~tomzawadz@192.55.54.38) (Ping timeout: 244 seconds)
[11:20:37] *** Quits: tzawadzki (~tomzawadz@192.55.54.42) (Ping timeout: 268 seconds)
[13:00:11] *** Joins: alekseymmm_ (bcf3adf1@gateway/web/freenode/ip.188.243.173.241)
[13:30:37] Attention all: for approximately 24 hours starting at 8:28 PM UTC today, the Chandler build pool will be paused while one of our labs undergoes routine maintenance. During this time, the Jenkins build pool will still be providing results.
[13:31:25] Correction: the build pool will be paused at 9:00 PM UTC.
[13:32:54] I am running into a segmentation fault when trying to start a VM that talks to Vhost via vhost-scsi. Here's my gdb backtrace:
Thread 5 "reactor_28" received signal SIGSEGV, Segmentation fault.
[13:32:55] [Switching to Thread 0x7fffeb59a700 (LWP 16436)]
[13:32:55] 0x00005555555cce49 in rte_vhost_enable_guest_notification (vid=vid@entry=1, queue_id=queue_id@entry=2,
[13:32:55] enable=enable@entry=0) at vhost.c:426
[13:32:55] 426 dev->virtqueue[queue_id]->used->flags = VRING_USED_F_NO_NOTIFY;
[13:32:57] (gdb) bt
[13:32:59] #0 0x00005555555cce49 in rte_vhost_enable_guest_notification (vid=vid@entry=1, queue_id=queue_id@entry=2,
[13:33:01] enable=enable@entry=0) at vhost.c:426
[13:33:03] #1 0x00005555555ab2ee in start_device (vid=1) at vhost.c:1039
[13:33:05] #2 0x00005555555cb82d in vhost_user_msg_handler (vid=, fd=fd@entry=84) at vhost_user.c:1346
[13:33:07] #3 0x00005555555ca65f in vhost_user_read_cb (connfd=connfd@entry=84, dat=dat@entry=0x7fffdc000ee0,
[13:33:09] remove=remove@entry=0x7fffeb599c78) at socket.c:293
[13:33:11] #4 0x00005555555cd45b in fdset_event_dispatch (arg=0x55555583f4a0 ) at fd_man.c:273
[13:33:13] #5 0x00007ffff68f96db in start_thread (arg=0x7fffeb59a700) at pthread_create.c:463
[13:33:15] #6 0x00007ffff662288f in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95
[13:33:17] (gdb)
[13:33:42] The VM is using both local lvols and remote lvols over NVMe-oF
[13:34:20] Any idea what could be going wrong?
[13:41:28] I took out the remote lvols and the VM started successfully with just the local lvols. So it looks like the issue is caused by the connection to the remote lvols via NVMe-oF
[13:41:47] how much memory in the VM?
[13:41:57] I ran perf on the system targeting those remote lvols successfully
[13:42:12] The VM has 4 GB
[13:42:21] I am using 1 GB hugepages
[13:42:52] which part of that line 426 is NULL?
[13:43:07] p dev
[13:43:16] p dev->virtqueue[2]
[13:45:39] p dev
[13:45:39] $1 =
[13:46:06] does it repro with a debug vhost build?
[13:46:07] p dev->virtqueue[2]
[13:46:07] value has been optimized out
[13:47:50] if i had to guess, it's related to timing between the SET_MEM_TABLE vhost message (which triggers the RDMA mappings) and the subsequent start_device
[13:48:54] no other error messages before the segfault?
[13:49:38] not from Vhost
[13:49:44] but in dmesg I see
[13:50:01] hmmm - no, can't be the rdma mappings - we don't start setting those up until after where this segfault happens
[13:50:04] just a single VM?
[13:50:07] virbr0: port 2(vnet0) entered blocking state
[13:50:08] [ 5129.636818] virbr0: port 2(vnet0) entered disabled state
[13:50:08] [ 5129.637024] device vnet0 entered promiscuous mode
[13:50:08] [ 5129.637596] virbr0: port 2(vnet0) entered blocking state
[13:50:08] [ 5129.637599] virbr0: port 2(vnet0) entered listening state
[13:50:09] [ 5131.644244] virbr0: port 2(vnet0) entered learning state
[13:50:10] [ 5133.660250] virbr0: port 2(vnet0) entered forwarding state
[13:50:12] [ 5133.660264] virbr0: topology change detected, propagating
[13:50:14] [ 5146.047189] reactor_28[12873]: segfault at 7f0d3ffd6000 ip 000055d840dd3e49 sp 00007f0f1b770a40 error 6 in spdk_tgt[55d840d5b000+e4000]
[13:50:17] [ 5146.246144] virbr0: port 2(vnet0) entered disabled state
[13:50:19] [ 5146.267153] device vnet0 left promiscuous mode
[13:50:21] [ 5146.267162] virbr0: port 2(vnet0) entered disabled state
[13:50:23] Yes just one VM
[13:55:00] *** Joins: tsg_ (~tsg@134.134.139.72)
[13:58:07] *** Quits: bwalker (~bwalker@134.134.139.72) (ZNC - http://znc.in)
[13:58:26] *** Joins: bwalker_ (~bwalker@134.134.139.72)
[13:58:27] *** Server sets mode: +cnt
[13:58:29] *** Quits: klateck (~klateck@134.134.139.72) (Ping timeout: 268 seconds)
[13:58:33] *** Quits: changpe1 (~changpe1@134.134.139.72) (Ping timeout: 264 seconds)
[13:58:34] *** Quits: lgalkax (lgalkax@nat/intel/x-llofzwoohdjffmol) (Ping timeout: 264 seconds)
[13:58:34] *** Quits: pniedzwx (pniedzwx@nat/intel/x-egumxzfyyrpipzui) (Ping timeout: 264 seconds)
[13:58:34] *** Quits: pbshah1 (~pbshah1@134.134.139.72) (Ping timeout: 264 seconds)
[13:58:39] *** Quits: jimharris (jimharris@nat/intel/x-hfcrcmlgolxnbegw) (Ping timeout: 260 seconds)
[13:59:30] *** Quits: mszwed (mszwed@nat/intel/x-mlaliwtpevykkaym) (Ping timeout: 256 seconds)
[13:59:30] *** Quits: pwodkowx (~pwodkowx@134.134.139.72) (Ping timeout: 256 seconds)
[13:59:30] *** Quits: bwalker (~bwalker@134.134.139.72) (Ping timeout: 256 seconds)
[13:59:30] *** Quits: jkkariu (jkkariu@nat/intel/x-rsikuzdkvrylmapo) (Ping timeout: 256 seconds)
[13:59:46] *** Quits: sethhowe (sethhowe@nat/intel/x-usnkpiyziwgzhvav) (Ping timeout: 264 seconds)
[13:59:49] *** Quits: vermavis (vermavis@nat/intel/x-xjvlthukiuowpupk) (Ping timeout: 260 seconds)
[14:01:47] *** Joins: pbshah1 (~pbshah1@134.134.139.72)
[14:03:18] *** Joins: pniedzwx (~pniedzwx@134.134.139.72)
[14:03:48] *** Joins: gangcao (gangcao@nat/intel/x-glfrfbyusfydljjd)
[14:04:22] *** Joins: lgalkax (lgalkax@nat/intel/x-jsemijadqoadypvg)
[14:05:49] *** Joins: changpe1 (changpe1@nat/intel/x-kghqjvazkbpwqggi)
[14:07:24] *** Joins: ziyeyang (ziyeyang@nat/intel/x-bsjsvdwajzwyhdfp)
[14:09:26] *** Joins: jimharris (jimharris@nat/intel/x-smpfmoxydwuzhjyn)
[14:09:26] *** ChanServ sets mode: +o jimharris
[14:09:52] *** Joins: ppelplin (ppelplin@nat/intel/x-vehhsggxcazkkxzu)
[14:10:22] *** Joins: klateck (klateck@nat/intel/x-nrbpxrepblqshqgj)
[14:11:53] *** Joins: kjakimia (kjakimia@nat/intel/x-wnivetlcilimajxj)
[14:12:54] *** Joins: pzedlews (~pzedlews@134.134.139.72)
[14:13:55] *** Joins: vermavis (vermavis@nat/intel/x-xkdtzwmhcitsnifd)
[14:15:20] *** Joins: pwodkowx (~pwodkowx@134.134.139.72)
[14:15:56] *** Joins: mszwed (mszwed@nat/intel/x-uavyncxluceesvta)
[14:16:26] *** Joins: peluse (~peluse@134.134.139.72)
[14:16:57] *** Joins: sethhowe (~sethhowe@134.134.139.72)
[14:17:27] *** Joins: jkkariu (jkkariu@nat/intel/x-frharzkpigcqiliq)
[15:08:44] *** Quits: alekseymmm_ (bcf3adf1@gateway/web/freenode/ip.188.243.173.241) (Quit: Page closed)
[15:10:31] *** bwalker_ is now known as bwalker
[15:10:44] *** ChanServ sets mode: +o bwalker
[15:13:42] sethhowell: I just made a comment on one of the nvmf patches
[15:15:08] sethhowe
[15:21:06] thx. fixing it now.
[15:22:06] your second to last patch failed tests too, but I'm sure you know
[15:52:37] yep. Rebase issue. Plus I am still working out a couple of timing things on the final patch.
[16:00:31] bwalker: Do you happen to know how long it takes to run the OSS script if everything runs smoothly?
[16:00:59] I thought it was like 20 minutes historically
[16:05:38] OK, good. I think that it is running through everything properly this time. From what I can understand from the docs, the first time you run kwbuildproject on each project after upgrading, it has to perform a synchronization. I ran the script again and it has been going strong for 10 minutes. I'll keep you posted.
[16:49:00] bwalker: I was actually wrong about my last theory, but found the proper fix, and we successfully ran Klocwork. I sent a follow-up e-mail.
[16:51:19] +2!
[17:18:46] *** Quits: peter_turschm (~peter_tur@66.193.132.66) (Remote host closed the connection)
[18:24:17] *** Quits: Guest87808 (~lyan@2605:a000:160e:2dd:4a4d:7eff:fef2:eea3) (Quit: Leaving)