
Logs on 2022-06-23 (liberachat/#haskell)

00:00:34 <hololeap> I was going to ask a question about this, but I think I figured it out
00:01:45 × unit73e quits (~emanuel@2001:818:e8dd:7c00:32b5:c2ff:fe6b:5291) (Ping timeout: 244 seconds)
00:03:47 × bitdex quits (~bitdex@gateway/tor-sasl/bitdex) (Remote host closed the connection)
00:04:02 HackingSpring joins (~haru@2804:431:c7f5:d4eb:75fd:791c:59a2:7773)
00:04:47 × califax quits (~califax@user/califx) (Remote host closed the connection)
00:05:53 califax joins (~califax@user/califx)
00:06:03 × Inoperable quits (~PLAYER_1@fancydata.science) (Excess Flood)
00:06:06 × ezzieyguywuf quits (~Unknown@user/ezzieyguywuf) (Ping timeout: 264 seconds)
00:06:38 dlbh^ joins (~dlbh@50.237.44.186)
00:09:00 bitdex joins (~bitdex@gateway/tor-sasl/bitdex)
00:09:51 Inoperable joins (~PLAYER_1@fancydata.science)
00:10:01 alp_ joins (~alp@user/alp)
00:12:44 ezzieyguywuf joins (~Unknown@user/ezzieyguywuf)
00:13:40 vysn joins (~vysn@user/vysn)
00:23:09 werneta joins (~werneta@70-142-214-115.lightspeed.irvnca.sbcglobal.net)
00:26:30 × gurkenglas quits (~gurkengla@dslb-002-207-014-022.002.207.pools.vodafone-ip.de) (Ping timeout: 264 seconds)
00:27:50 sebastiandb joins (~sebastian@pool-108-31-128-56.washdc.fios.verizon.net)
00:28:25 × [itchyjunk] quits (~itchyjunk@user/itchyjunk/x-7353470) (Ping timeout: 256 seconds)
00:30:17 × ezzieyguywuf quits (~Unknown@user/ezzieyguywuf) (Ping timeout: 248 seconds)
00:31:21 × Colere quits (~colere@about/linux/staff/sauvin) (Ping timeout: 248 seconds)
00:32:30 [itchyjunk] joins (~itchyjunk@user/itchyjunk/x-7353470)
00:33:55 Colere joins (~colere@about/linux/staff/sauvin)
00:36:58 × xff0x quits (~xff0x@b133147.ppp.asahi-net.or.jp) (Ping timeout: 240 seconds)
00:40:57 × alp_ quits (~alp@user/alp) (Ping timeout: 248 seconds)
00:48:18 × mon_aaraj quits (~MonAaraj@user/mon-aaraj/x-4416475) (Ping timeout: 240 seconds)
00:50:28 mon_aaraj joins (~MonAaraj@user/mon-aaraj/x-4416475)
00:56:04 ezzieyguywuf joins (~Unknown@user/ezzieyguywuf)
01:01:52 <johnw> you wouldn't be able to implement that
01:02:04 <johnw> just hoping that's what you figured out :)
01:02:18 × jao quits (~jao@cpc103048-sgyl39-2-0-cust502.18-2.cable.virginm.net) (Ping timeout: 240 seconds)
01:02:29 <dolio> Just ask the djinni.
01:10:36 × arahael quits (~arahael@118.211.187.178) (Ping timeout: 258 seconds)
01:10:39 nate4 joins (~nate@98.45.169.16)
01:24:06 × dlbh^ quits (~dlbh@50.237.44.186) (Ping timeout: 264 seconds)
01:24:27 aeka` joins (~aeka@2606:6080:1001:d:c59c:6e9a:3115:6f2f)
01:25:44 × aeka quits (~aeka@user/hiruji) (Ping timeout: 248 seconds)
01:25:44 aeka` is now known as aeka
01:26:50 × hpc quits (~juzz@ip98-169-32-242.dc.dc.cox.net) (Ping timeout: 240 seconds)
01:26:59 xff0x joins (~xff0x@125x103x176x34.ap125.ftth.ucom.ne.jp)
01:27:03 <zzz> johnw: i spent 10 minutes playing with sequence before it hit me
01:28:41 arahael joins (~arahael@203.63.7.203)
01:29:02 hpc joins (~juzz@ip98-169-32-242.dc.dc.cox.net)
01:30:11 × aeka quits (~aeka@2606:6080:1001:d:c59c:6e9a:3115:6f2f) (Ping timeout: 255 seconds)
01:31:00 aeka joins (~aeka@user/hiruji)
01:33:38 × sebastiandb quits (~sebastian@pool-108-31-128-56.washdc.fios.verizon.net) (Ping timeout: 240 seconds)
01:33:43 × waleee quits (~waleee@2001:9b0:213:7200:cc36:a556:b1e8:b340) (Ping timeout: 244 seconds)
01:36:25 × Kaipei quits (~Kaiepi@156.34.47.253) (Read error: Connection reset by peer)
01:38:36 pavonia joins (~user@user/siracusa)
01:39:25 × _xor quits (~xor@74.215.182.83) (Quit: brb)
01:39:38 × zebrag quits (~chris@user/zebrag) (Ping timeout: 240 seconds)
01:40:33 Kaiepi joins (~Kaiepi@156.34.47.253)
01:40:50 × toluene quits (~toluene@user/toulene) (Ping timeout: 240 seconds)
01:41:31 × machinedgod quits (~machinedg@66.244.246.252) (Ping timeout: 256 seconds)
01:43:01 toluene joins (~toluene@user/toulene)
01:48:38 × nate4 quits (~nate@98.45.169.16) (Ping timeout: 240 seconds)
01:50:58 × yrlnry quits (~yrlnry@pool-108-2-150-109.phlapa.fios.verizon.net) (Remote host closed the connection)
01:51:23 × HackingSpring quits (~haru@2804:431:c7f5:d4eb:75fd:791c:59a2:7773) (Remote host closed the connection)
01:51:35 yrlnry joins (~yrlnry@pool-108-2-150-109.phlapa.fios.verizon.net)
01:52:27 nate4 joins (~nate@98.45.169.16)
01:54:40 kannon joins (~NK@74-95-14-193-SFBA.hfc.comcastbusiness.net)
01:55:38 × yrlnry quits (~yrlnry@pool-108-2-150-109.phlapa.fios.verizon.net) (Ping timeout: 240 seconds)
01:55:57 × kannon quits (~NK@74-95-14-193-SFBA.hfc.comcastbusiness.net) (Read error: Connection reset by peer)
01:58:54 × nate4 quits (~nate@98.45.169.16) (Ping timeout: 264 seconds)
02:04:33 <hololeap> johnw, you can, if a) z is a monoid b) you already have a z lying around
02:05:43 <hololeap> so, I'm kinda just throwing the INLINE pragma on all of my pointfree functions. is this reasonable?
02:07:32 <DigitalKiwi> idk sounds kind of pointless
02:07:37 <monochrom> hahaha
02:07:45 <monochrom> I think you should benchmark.
02:07:55 <DigitalKiwi> ba dum tsch
02:09:54 <hololeap> I'm just wondering if this is a reasonable heuristic, not really asking if it will _always_ speed things up, but if it will speed _some_ things up without causing any problems
02:10:29 <monochrom> Then I don't know.
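For reference, hololeap's heuristic in pragma form, on an invented example function; whether the pragma actually pays off is exactly what monochrom's benchmarking advice is for, since over-applied INLINE can bloat generated code.

```haskell
-- A hypothetical point-free definition with hololeap's blanket INLINE
-- applied. GHC often inlines small saturated bindings on its own, so
-- the pragma mostly matters across module boundaries; measuring is the
-- only way to know whether it helps here.
{-# INLINE sumSquares #-}
sumSquares :: [Int] -> Int
sumSquares = sum . map (^ 2)
```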
02:12:08 <hololeap> also, is there a wrapper that will transform a semigroup into a monoid, kind of like how MaybeApply will transform an Apply to an Applicative?
02:12:29 <hololeap> something in base or semigroupoids that I'm just not spotting
02:13:00 <hololeap> oh, I guess Maybe works
02:13:13 <hololeap> there it is :)
02:13:22 <monochrom> Oh haha it's already in base.
02:13:33 <hololeap> thanks, monochrom
02:13:39 <EvanR> +1 to that
02:14:30 <zzz> lol
02:14:57 <zzz> thanks for Nothing
02:15:09 <monochrom> hahahaha
02:15:40 <monochrom> Secret Santa put it in base!
02:16:16 × liz quits (~liz@host86-159-158-175.range86-159.btcentralplus.com) (Quit: leaving)
02:20:46 brettgilio joins (~brettgili@c9yh.net)
02:21:31 × ezzieyguywuf quits (~Unknown@user/ezzieyguywuf) (Quit: leaving)
02:26:31 hnOsmium0001 joins (uid453710@user/hnOsmium0001)
02:29:08 <hololeap> :t \z = swap . first (fromMaybe z) . traverse (swap . second Just)
02:29:10 <lambdabot> error: parse error on input ‘=’
02:29:18 <hololeap> :t \z -> swap . first (fromMaybe z) . traverse (swap . second Just)
02:29:19 <lambdabot> (Traversable t, Semigroup a) => a -> t (b, a) -> (t b, a)
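hololeap's lambdabot query, restated as a named definition (the name `foldPairs` is invented here for illustration): `traverse` in the `((,) (Maybe a))` Applicative accumulates the Semigroup half of each pair, with `Maybe` supplying the identity element that a bare Semigroup lacks, so the fallback `z` is consulted only when the container is empty.

```haskell
import Data.Bifunctor (first, second)
import Data.Maybe (fromMaybe)
import Data.Tuple (swap)

-- The expression lambdabot typechecked above. The writer-like
-- Applicative for pairs needs a Monoid in its first slot; wrapping each
-- Semigroup value in Just provides one (Nothing is the identity), and
-- fromMaybe z unwraps the result, falling back to z for empty input.
foldPairs :: (Traversable t, Semigroup a) => a -> t (b, a) -> (t b, a)
foldPairs z = swap . first (fromMaybe z) . traverse (swap . second Just)
```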
02:31:52 ezzieyguywuf joins (~Unknown@user/ezzieyguywuf)
02:36:06 × Kaiepi quits (~Kaiepi@156.34.47.253) (Ping timeout: 264 seconds)
02:38:03 <zzz> ok question:
02:39:38 <zzz> traverse is defined in terms of sequence, and sequence is in turn defined in terms of traverse
02:40:21 nate4 joins (~nate@98.45.169.16)
02:40:42 <zzz> more specifically
02:40:54 _xor joins (~xor@74.215.182.83)
02:40:55 <zzz> traverse f = sequenceA . fmap f
02:40:58 <zzz> and
02:41:13 <zzz> sequenceA = traverse id
02:42:04 <zzz> is this a "lie" or am i missing something?
02:44:50 <zzz> ok nvm i was missing something
02:45:05 terrorjack joins (~terrorjac@2a01:4f8:1c1e:509a::1)
02:45:48 leeb joins (~leeb@KD106155002239.au-net.ne.jp)
02:45:55 <zzz> more specitically
02:46:26 <zzz> {-# MINIMAL traverse | sequenceA #-}
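What zzz was missing, sketched for a hypothetical `Pair` type: the circular-looking defaults `traverse f = sequenceA . fmap f` and `sequenceA = traverse id` are broken by the MINIMAL pragma, which forces every instance to override at least one of the two; the other then comes for free.

```haskell
-- A minimal Traversable instance supplying only traverse; the default
-- sequenceA = traverse id then works without any circularity, because
-- this instance's traverse never calls sequenceA.
data Pair a = Pair a a deriving (Show, Eq)

instance Functor Pair where
  fmap f (Pair x y) = Pair (f x) (f y)

instance Foldable Pair where
  foldr f z (Pair x y) = f x (f y z)

instance Traversable Pair where
  traverse f (Pair x y) = Pair <$> f x <*> f y
```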
02:46:26 × nate4 quits (~nate@98.45.169.16) (Ping timeout: 246 seconds)
02:46:51 × yauhsien quits (~yauhsien@61-231-23-53.dynamic-ip.hinet.net) (Remote host closed the connection)
02:47:23 × brettgilio quits (~brettgili@c9yh.net) (Quit: The Lounge - https://thelounge.chat)
02:49:33 brettgilio joins (~brettgili@c9yh.net)
02:54:05 yrlnry joins (~yrlnry@pool-108-2-150-109.phlapa.fios.verizon.net)
02:54:49 × mvk quits (~mvk@2607:fea8:5ce3:8500::4588) (Ping timeout: 248 seconds)
02:54:58 × yrlnry quits (~yrlnry@pool-108-2-150-109.phlapa.fios.verizon.net) (Remote host closed the connection)
02:55:36 yrlnry joins (~yrlnry@pool-108-2-150-109.phlapa.fios.verizon.net)
02:57:08 × jrm quits (~jrm@user/jrm) (Quit: ciao)
02:58:28 jrm joins (~jrm@user/jrm)
02:58:46 × yrlnry quits (~yrlnry@pool-108-2-150-109.phlapa.fios.verizon.net) (Remote host closed the connection)
03:01:28 × td_ quits (~td@muedsl-82-207-238-203.citykom.de) (Ping timeout: 268 seconds)
03:02:50 td_ joins (~td@94.134.91.184)
03:03:06 × vysn quits (~vysn@user/vysn) (Ping timeout: 264 seconds)
03:03:15 × jrm quits (~jrm@user/jrm) (Client Quit)
03:04:15 <dsal> `sequenceA` always lies and `traverse` always tells the truth, so it works.
03:04:28 jrm joins (~jrm@user/jrm)
03:06:35 azimut joins (~azimut@gateway/tor-sasl/azimut)
03:07:51 yauhsien joins (~yauhsien@61-231-23-53.dynamic-ip.hinet.net)
03:12:25 × Unicorn_Princess quits (~Unicorn_P@93-103-228-248.dynamic.t-2.net) (Quit: Leaving)
03:18:09 <zzz> it could work even if you didn't know which is which
03:22:33 × [itchyjunk] quits (~itchyjunk@user/itchyjunk/x-7353470) (Remote host closed the connection)
03:24:13 eggplant_ joins (~Eggplanta@108-201-191-115.lightspeed.sntcca.sbcglobal.net)
03:26:08 × eggplantade quits (~Eggplanta@2600:1700:bef1:5e10:99c9:a0a4:f69e:b22e) (Ping timeout: 268 seconds)
03:28:15 yangby joins (~secret@115.206.19.11)
03:28:56 × yangby quits (~secret@115.206.19.11) (Client Quit)
03:30:24 nate4 joins (~nate@98.45.169.16)
03:34:45 × ezzieyguywuf quits (~Unknown@user/ezzieyguywuf) (Remote host closed the connection)
03:35:04 <Axman6> "SequenceA: I am lying" -> Exception: <<loop>>
03:36:05 ezzieyguywuf joins (~Unknown@user/ezzieyguywuf)
03:39:18 × nate4 quits (~nate@98.45.169.16) (Ping timeout: 240 seconds)
03:40:26 <monochrom> heh
03:40:31 × zso quits (~inversed@97e3d74e.skybroadband.com) (Ping timeout: 256 seconds)
03:41:42 × toluene quits (~toluene@user/toulene) (Quit: Ping timeout (120 seconds))
03:42:59 inversed joins (~inversed@97e3d74e.skybroadband.com)
03:43:12 toluene joins (~toluene@user/toulene)
03:47:35 lisbeths joins (uid135845@id-135845.lymington.irccloud.com)
03:51:55 <Axman6> Reminds me of my favourite (and only) logic joke: Three logicians walk into a bar. The bartender asks "Would you all like a drink?". The first one say "I don'
03:52:12 <Axman6> "I don't know", the second one says "I don't know", and the third one says "Yes".
03:52:31 <Axman6> says*
03:52:52 × causal quits (~user@50.35.83.177) (Quit: WeeChat 3.5)
03:56:18 × winny quits (~weechat@user/winny) (Remote host closed the connection)
03:56:45 winny joins (~weechat@user/winny)
03:57:29 × ezzieyguywuf quits (~Unknown@user/ezzieyguywuf) (Ping timeout: 246 seconds)
04:00:03 ezzieyguywuf joins (~Unknown@user/ezzieyguywuf)
04:01:39 jargon joins (~jargon@184.101.186.108)
04:05:11 ski joins (~ski@remote11.chalmers.se)
04:05:36 × ezzieyguywuf quits (~Unknown@user/ezzieyguywuf) (Ping timeout: 268 seconds)
04:13:50 × Vajb quits (~Vajb@hag-jnsbng11-58c3a8-176.dhcp.inet.fi) (Read error: Connection reset by peer)
04:14:03 Vajb joins (~Vajb@85-76-45-183-nat.elisa-mobile.fi)
04:21:09 ezzieyguywuf joins (~Unknown@user/ezzieyguywuf)
04:29:10 justsomeguy joins (~justsomeg@user/justsomeguy)
04:31:05 × ezzieyguywuf quits (~Unknown@user/ezzieyguywuf) (Ping timeout: 246 seconds)
04:31:18 nate4 joins (~nate@98.45.169.16)
04:36:24 × nate4 quits (~nate@98.45.169.16) (Ping timeout: 272 seconds)
04:37:22 ezzieyguywuf joins (~Unknown@user/ezzieyguywuf)
04:43:16 Kaiepi joins (~Kaiepi@156.34.47.253)
04:53:46 × arkeet quits (arkeet@moriya.ca) (Quit: ZNC 1.8.2 - https://znc.in)
04:57:02 misterfish joins (~misterfis@ip214-130-173-82.adsl2.static.versatel.nl)
04:59:02 × jargon quits (~jargon@184.101.186.108) (Remote host closed the connection)
05:02:30 × vglfr quits (~vglfr@coupling.penchant.volia.net) (Ping timeout: 276 seconds)
05:07:29 × justsomeguy quits (~justsomeg@user/justsomeguy) (Ping timeout: 246 seconds)
05:08:11 moet joins (~moet@mobile-166-171-250-122.mycingular.net)
05:08:16 × yauhsien quits (~yauhsien@61-231-23-53.dynamic-ip.hinet.net) (Remote host closed the connection)
05:09:48 <moet> hi.. i'm running `hoogle server --port=8088` in a virtual machine guest and trying to browse it from the VM host. firefox requests the page in https, but then upgrades all the resources (css, etc) to https and cannot load them (because hoogle isn't serving https)
05:10:12 <moet> i can't tell if this is an issue with hoogle or with firefox, so i tried out safari and the same thing is happening.. this makes me think it's an issue with hoogle
05:11:19 yrlnry joins (~yrlnry@pool-108-2-150-109.phlapa.fios.verizon.net)
05:11:40 <moet> any ideas about what to try next? i'm able to curl (over http) the hoogle html and css and other resources
05:12:29 triteraflops joins (~triterafl@user/triteraflops)
05:12:54 <triteraflops> Ho! What news?
05:13:53 × yrlnry quits (~yrlnry@pool-108-2-150-109.phlapa.fios.verizon.net) (Remote host closed the connection)
05:14:31 yrlnry joins (~yrlnry@pool-108-2-150-109.phlapa.fios.verizon.net)
05:14:43 <triteraflops> I'm trying to wrap my head around large objects.
05:14:47 <triteraflops> hm.
05:15:13 <triteraflops> Reading this back, maybe I should stop trying to wrap my head around large objects.
05:15:16 yauhsien joins (~yauhsien@61-231-23-53.dynamic-ip.hinet.net)
05:16:07 <triteraflops> anyway, large objects shouldn't be copied. A mutation can be represented as a pure operation if the large object is not aliased.
05:16:50 <triteraflops> And haskell will not automatically mutate large objects, even when it could. Which means the compiler can't detect whether an object is aliased.
05:16:58 <triteraflops> But why is that a hard problem?
05:17:09 <triteraflops> Why can't GHC detect this?
05:17:36 × yrlnry quits (~yrlnry@pool-108-2-150-109.phlapa.fios.verizon.net) (Remote host closed the connection)
05:19:46 <moet> i'm not sure i understand what you're seeing vs what you want to see triteraflops... can you state it in terms of this example? `let foo = Foo{field1=Just ..., ..., fieldN=...}; let bar = foo{field1=Nothing}; in ...` assume that Foo contains many large fields
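A compilable version of moet's sketch (the field names and types are invented): a record update allocates a fresh constructor cell but shares every untouched field with the original by pointer, so a large payload is not copied.

```haskell
-- bar = foo { field1 = Nothing } builds a new Foo whose field2 slot
-- points at the very same heap object as foo's; only the constructor
-- cell itself (one word per field) is new.
data Foo = Foo
  { field1 :: Maybe Int
  , field2 :: [Int]      -- stands in for a large field
  } deriving (Show, Eq)

demo :: (Foo, Foo)
demo =
  let foo = Foo { field1 = Just 1, field2 = [1 .. 1000] }
      bar = foo { field1 = Nothing }   -- field2 is shared, not copied
  in (foo, bar)
```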
05:19:52 takuan joins (~takuan@178-116-218-225.access.telenet.be)
05:20:41 <triteraflops> moet: the cleanest example I can think of is a function I wrote recently which speeds itself up using a hashmap.
05:20:44 × yauhsien quits (~yauhsien@61-231-23-53.dynamic-ip.hinet.net) (Ping timeout: 272 seconds)
05:20:51 <triteraflops> moet: let me see if I can find it.
05:23:17 vglfr joins (~vglfr@88.155.20.3)
05:23:37 jargon joins (~jargon@184.101.186.108)
05:23:46 × jargon quits (~jargon@184.101.186.108) (Remote host closed the connection)
05:24:12 jargon joins (~jargon@184.101.186.108)
05:24:44 × bilegeek quits (~bilegeek@2600:1008:b06f:8528:b8b4:9bf9:3a8:ef97) (Quit: Leaving)
05:25:30 <triteraflops> moet: aw hell no lol. This example I found is actually needlessly complicated for demonstrating the basic point.
05:30:52 <davean> triteraflops: why do you think it is reasonable to tell if something has multiple references?
05:31:11 <triteraflops> davean: not all the time. Just some of the time.
05:32:13 <triteraflops> if multiple references can only arise from some kind of fork of the form (f x x), then you should be able to tell.
05:32:25 <davean> Thats not the only way it can
05:32:31 yauhsien joins (~yauhsien@61-231-23-53.dynamic-ip.hinet.net)
05:32:32 <triteraflops> davean: how else?
05:32:40 <davean> litterly every reference to it creates a reference
05:33:11 <davean> They only get cleaned up when computation is forced, or the GC simplifies
05:33:22 <davean> the only thing that knows how many things refer to some thing is the GC
05:33:46 <davean> every time you say something like "fieldN x" thats a reference that doesn't get resolved until demand
05:33:53 <davean> EVERYTHING IS A HELD REFERENCE
05:33:59 <davean> EVERYTHING
05:34:22 <triteraflops> so maybe it can't be done in runtime
05:34:31 <triteraflops> but perhaps it can be done at compile time
05:34:53 arkeet joins (arkeet@moriya.ca)
05:35:20 <triteraflops> as long as the functions are inspectable by ghc. FFI functions are clearly ineligible
05:35:37 <triteraflops> Can't you tell how many times a function uses one of its inputs?
05:35:40 <triteraflops> at compile time?
05:35:57 <davean> Its not about how many times it uses it
05:35:58 × leeb quits (~leeb@KD106155002239.au-net.ne.jp) (Ping timeout: 240 seconds)
05:36:13 <davean> "f x" that x, how many times is it already referenced?
05:36:24 <davean> You can optimize it to produce fewer intermediate copies
05:36:33 <davean> but you can't know how many times x is already referenced
05:37:05 <triteraflops> Sometimes, you can.
05:37:37 <triteraflops> let x' = copy x in g x'
05:37:45 <davean> There is no "copy x"
05:37:49 <davean> but lets go simpler
05:37:50 <triteraflops> but there could be
05:38:07 <davean> ok, how many times is the argument referenced if I say "f (Foo ...)"
05:38:07 leeb joins (~leeb@KD106154144179.au-net.ne.jp)
05:38:16 <triteraflops> if there were, there could be some kind of compiler optimisation for it
05:38:27 neoatnebula joins (~neoatnebu@49.206.16.59)
05:38:42 <triteraflops> davean: which argument
05:38:50 <davean> (Foo ...)
05:38:56 <davean> How many times is (Foo ...) referenced?
05:39:13 <triteraflops> are we f, or are we calling f?
05:39:30 mbuf joins (~Shakthi@122.164.15.160)
05:39:34 <davean> Calling f say
05:39:52 Batzy_ joins (~quassel@user/batzy)
05:39:59 <triteraflops> (Foo 45) is brand new, so there's only one of them
05:40:08 <davean> there are between 0 and N of them
05:40:24 <triteraflops> It's brand new. How could there be?
05:40:41 × mjs22 quits (~mjs22@76.115.19.239) (Quit: Leaving)
05:41:19 <davean> well if f is inlined, then Foo is never constructed
05:41:32 <davean> if the same code exists elsewhere, it might be lifted and be passed as a reference
05:41:42 <davean> Litterly between 0 and N copies
05:41:51 nate4 joins (~nate@98.45.169.16)
05:41:55 <davean> 1 is not even the probable case
05:42:54 triteraflops looks up litterly in the wiktionary
05:43:05 × Batzy quits (~quassel@user/batzy) (Ping timeout: 255 seconds)
05:43:13 <davean> Literally
05:43:33 <davean> I was in no way being anything but exact in that statement and you can't narrow it down more
05:45:06 <triteraflops> so... making no demands on the number of references to a potential (Foo 45) allows for optimisations that would be impossible otherwise?
05:45:14 <triteraflops> interesting
05:45:27 <triteraflops> but would preclude other optimisations that could be done and may be more important
05:46:13 <davean> if I say (field1 f)+(field2 f) how many references to f are there?
05:46:36 <triteraflops> ah, well, this is a clear example of a form (f x x)
05:46:58 <triteraflops> so you couldn't use compile time analysis to allow mutation
05:47:20 × nate4 quits (~nate@98.45.169.16) (Ping timeout: 272 seconds)
05:48:08 <triteraflops> on the other hand, if it were h . g . f $ (Foo 45)
05:48:13 <davean> well really? then what about "add f = (field1 f)+(field2 f)"
05:48:41 <davean> I *really* don't think you get what non-strict means.
05:48:51 <davean> I already said it could eliminate intermediate copies
05:48:51 <triteraflops> oh great, now it's recursive
05:48:59 <davean> theres nothing recursive there
05:49:14 <triteraflops> ohhh now I get it
05:49:29 <triteraflops> This is a function definition, not one part of a let statement
05:49:30 <triteraflops> right
05:49:54 <triteraflops> so the add function you define might be OK, or might not, depending on the type of (field1 f)
05:50:11 <triteraflops> If it's an Int, say, then maybe you could get away with it.
05:50:13 <davean> Say they're both Int
05:50:38 <triteraflops> The simplest optimisation algorithm would look at add, see that it is using f twice, and give up
05:51:15 <triteraflops> I'm basically looking for the simplest case where haskell could prove mutation were possible
05:51:54 <davean> So I'm getting a bit tired, so I'm just going to say again that the compiler can eliminate intermediate copies - thats standard code optimization using local reasoning, but the mutation you claim requires global reasoning.
05:52:09 <triteraflops> not always
05:52:14 <triteraflops> it can be made local with explicit copy
05:52:24 <triteraflops> or with the creation of a constant object
05:52:25 <davean> No you can't
05:52:34 <davean> haskell is non-strict
05:52:40 <davean> you can't even make the copy happen
05:52:49 <davean> "copy x" is its self a reference to x
05:53:06 <triteraflops> What about seq?
05:53:11 <davean> what about seq?
05:53:18 <triteraflops> Haskell isn't always non-strict
05:53:23 <triteraflops> strictness can be enforced
05:53:39 <davean> Here is where I direct you to the report and you learn what seq is
05:53:54 <davean> but also, all you're doing here is creating MORE copying not less
05:54:02 <triteraflops> davean: You make interesting assumptions of my knowledge.
05:54:05 <davean> Because this is *strictly more copies than eliminating intermediate copies*
05:54:22 <davean> triteraflops: They're not assumptions, they're responses to what you're saying
05:54:25 Midjak joins (~Midjak@82.66.147.146)
05:54:30 gurkenglas joins (~gurkengla@dslb-002-207-014-022.002.207.pools.vodafone-ip.de)
05:55:20 <triteraflops> If a function needs to change a large object a million times before returning it, it would be nice not having to copy it every time.
05:55:36 <triteraflops> So you do one copy, maybe, depending, and use that copy to prove mutation is safe.
05:55:40 <davean> So hold up
05:55:52 <davean> you just said a different statement, and I keep saying it can infact eliminate intermediate copies
05:56:08 <davean> it never needs to produce anything but the final returned thing
05:56:23 <davean> infact, almost never will - also for the same reasons as above
05:57:09 <triteraflops> so if I insert 100 million ints into a hashmap using a single function, haskell will not make a copy every time?
05:57:57 <davean> it depends on the exact code, but often no, Haskell is non-strict
05:58:28 <triteraflops> copy elimination does not necessarily follow from laziness.
05:58:46 <davean> Correct, I said it depends on the exact code
05:58:50 <davean> also I didn't say lazy
05:59:06 <triteraflops> You mean to say lazy and strict aren't opposites?
05:59:10 <davean> Correct
05:59:14 <triteraflops> oh bloody hell
05:59:32 <triteraflops> This might help explain some of the confusing.
05:59:35 <triteraflops> *ion
05:59:55 <davean> lazy and strict are on opposite sides, there are a few other things over with lazy, and there is an entire ocean in between
06:00:14 coot joins (~coot@213.134.190.95)
06:00:17 nate4 joins (~nate@98.45.169.16)
06:01:05 michalz joins (~michalz@185.246.204.97)
06:01:06 <davean> This is why I referenced the Haskell Report when you brought up seq
06:01:46 <triteraflops> "Lazy and strict aren't opposites. They're *opposites*."——davean
06:03:18 <davean> I mean Europe and Egypt are on opposite sides of the med. but Egypt isn't the opposite side, its some of what isn't Europe.
06:03:33 <davean> and there is the med inbetween
06:03:48 <davean> Haskell is non-strict, it isn't lazy
06:04:01 × Sgeo quits (~Sgeo@user/sgeo) (Read error: Connection reset by peer)
06:04:24 <triteraflops> This single sentence contradicts every description of haskell I've ever read.
06:04:52 <davean> The report is clear about haskell being non-strict
06:05:15 <davean> things CAN be eagerly evaluated if it doesn't violate some information rules
06:05:20 <davean> and infact is
06:05:30 <davean> in GHC and other implimentations
06:05:41 <pavonia> What's the difference beween non-strict and lazy?
06:06:25 <davean> pavonia: basically with non-strict you don't see bottoms from undemanded computations
06:06:34 <davean> pavonia: but with laziness non-demanded computations are not evaluated
06:06:56 <davean> so Haskell can be strict up to the point that it would prove it wasn't lazy
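davean's distinction in code: non-strictness is the semantic guarantee that an undemanded bottom is never observed, while laziness is one evaluation strategy that delivers that guarantee; an implementation is free to evaluate speculatively so long as the equation below still holds.

```haskell
-- firstOnly is non-strict in its second argument: the bottom passed to
-- it is never demanded, so the program must (and does) return 42.
-- A lazy implementation achieves this by not evaluating the argument;
-- other strategies are allowed as long as they preserve this result.
firstOnly :: Int -> Int -> Int
firstOnly x _ = x

noExplosion :: Int
noExplosion = firstOnly 42 undefined
```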
06:07:07 <triteraflops> So, haskell *can* use static analysis to avoid copying large objects?
06:07:19 <davean> triteraflops: *head desk*
06:07:47 <davean> triteraflops: I can use static analysis to avoid making the copies in the first place, locally
06:08:07 <davean> there are some global things one could do, but they're VERY limited
06:08:12 <pavonia> I don't understand that explanation, tbh
06:08:19 <davean> pavonia: ok, uh
06:08:25 <triteraflops> Your explanations are pretty cryptic really.
06:08:43 <davean> so like a Haskell implimentation can specualatively evaluate something, or fuse computation
06:09:36 <triteraflops> It just seems like you're getting really upset at my choice of words, and repeating what I said, only using different words, rather than just saying yes or no
06:09:54 <davean> triteraflops: But the details are all that matter here
06:10:09 <davean> This is a very exacting thing
06:11:25 <triteraflops> It is pretty clear that the answer to "So, haskell *can* use static analysis to avoid copying large objects?" is yes, right now.
06:11:43 <triteraflops> or at least a "sometimes"
06:11:46 <triteraflops> which is also a yes
06:11:57 <davean> triteraflops: It can use static analysis to avoid making copies, it can't use it to allow mutation really.
06:12:20 <davean> Not in any practical sense at least to the later
06:12:22 <triteraflops> maybe it depends on your definition of mutation
06:12:39 <triteraflops> My idea of mutation is allowing x' = f x to reuse x's memory
06:12:40 <davean> one is a code optimization, one is a data dependency thing.
06:12:57 <triteraflops> at least sometimes
06:13:12 <davean> yah, and that it can't do
06:13:19 <davean> Not sanely almost ever at least
06:13:36 <davean> Some VERY local cases, it could, but by not forming them like that in the first place really
06:13:49 <triteraflops> and I'm saying that static analysis can be easily conceived which would allow the memory reuse.
06:14:24 <davean> and I'm saying you don't get how fast and easily references leak depending on strictness - this depends on EXACTLY how the entire thing compiles
06:14:28 <triteraflops> You just need a new object to start or a data dependency-breaking operation. Some kind of copy.
06:14:49 <davean> once you're copying it, thats the one copy you might make anyway
06:14:58 <davean> because all we need to get out is the end result
06:15:33 <Axman6> Haskell gets a hell of a lot easier to understand when you realise it's basically all just case statements - and because of that, lots of optimisation can be built - like if you have case Foo a b c of Bar ... -> ...; Foo x y z -> res - you can eliminate that Foo ever being constructed and just pass in a b c for x y z in res - BAM, no large object recreation
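Axman6's case-of-known-constructor point written out (the type and names are invented): GHC's simplifier rewrites `before` into something like `after`, so the `Foo` cell is never allocated.

```haskell
-- Scrutinising a constructor application whose head is statically known
-- lets the simplifier pick the matching branch at compile time and
-- substitute the fields directly, eliminating the allocation.
data T = Bar Int | Foo Int Int Int

before :: Int -> Int -> Int -> Int
before a b c =
  case Foo a b c of
    Bar n     -> n
    Foo x y z -> x + y + z

after :: Int -> Int -> Int -> Int
after a b c = a + b + c    -- roughly what the simplifier leaves behind
```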
06:15:46 <Axman6> triteraflops: coming back to your first question, in your mind, what is a large object?
06:16:05 <triteraflops> Axman6: a 4 gigabyte 3D array of float32s
06:16:25 <Axman6> Sounds like the perfect thing for the ST monad
06:16:39 <davean> right but thats the programmer reusing the memory
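Axman6's ST suggestion sketched with the GHC-bundled array package (one concrete choice among several; vector's mutable unboxed vectors work the same way): inside `runSTUArray` the array is mutated in place and frozen once at the end, which is davean's point: the programmer, not the compiler, proves the reuse safe.

```haskell
import Control.Monad (forM_)
import Data.Array.ST (newArray, runSTUArray, writeArray)
import Data.Array.Unboxed (UArray, elems)

-- One allocation for the whole unboxed Float array, then in-place
-- writes inside ST; runSTUArray freezes it without a final copy, and
-- the result is an ordinary pure value.
fillSquares :: Int -> UArray Int Float
fillSquares n = runSTUArray $ do
  arr <- newArray (0, n - 1) 0
  forM_ [0 .. n - 1] $ \i ->
    writeArray arr i (fromIntegral i * fromIntegral i)
  pure arr
```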
06:17:08 ubert joins (~Thunderbi@p200300ecdf0da521adf2b2fea6746db1.dip0.t-ipconnect.de)
06:17:21 <triteraflops> actually davean, it looks like the ST monad is filling the role of the copy operation that you said didn't exist lol
06:17:33 <davean> triteraflops: its ... not
06:18:38 <triteraflops> Axman6: or a 4GB hashmap being generated incrementally by a single function, then returned
06:19:11 <triteraflops> or a hashmap of the same size, requiring updates
06:19:37 <triteraflops> this last case, I don't think even the static analysis could allow, but I'd have to think about it.
06:22:04 <Axman6> a hashmap isn't a single large object though
06:22:17 <davean> It depends on the implimentation
06:22:22 <davean> which was a point I made above
06:22:28 <triteraflops> Suppose it were made a single large object
06:22:29 <Axman6> the Haskell unordered-containers implementation anyway
06:22:38 <davean> Axman6: well thats one VERY specific implimentation
06:22:52 <davean> its no cuckoo hashing for example
06:23:38 <davean> Data.Array cuckoo hashing would be close to a single object
06:24:34 <davean> Array !i !i !Int (Array# e)
06:24:42 <davean> Not REALLY a single object there, but we're not far off
06:25:11 <Axman6> right, but in that case, if you were strictly after the performance and semantics (i.e. not persistent) of a traditional hashmap, then you would need to use true mutation to avoid copying - and luckily, we can do that
06:25:35 <davean> Axman6: He was asking about static analysis on Haskell for optimizations though
06:25:45 <davean> Not how to impliment something well :)_
06:25:55 <davean> An entirely different discussion
06:27:06 lortabac joins (~lortabac@2a01:e0a:541:b8f0:fd56:2c1b:68f1:73e4)
06:27:40 <triteraflops> Well, a hashmap implementation may be forced to copy the entire hashmap in order to add one item, right?
06:28:00 <Axman6> sure, I get that, but I'm saying the way we avoid copying large objects is by not copying large objects
06:28:00 <triteraflops> Depends on implementation, but it *may* be forced. Suppose this is the kind of implementation we have.
06:28:01 <davean> depending on the implimentation and code ...
06:28:10 MajorBiscuit joins (~MajorBisc@c-001-001-031.client.tudelft.eduvpn.nl)
06:28:34 <davean> triteraflops: the way to provide the semantics to get what you want is probably fusion BTW
06:28:40 <Axman6> Data.HashMap will copy at most O(log n) objects
06:28:42 <davean> Its not Haskell
06:28:56 <davean> Axman6: He's specificly not talking about that though
06:29:02 <Axman6> Haskell isn't some magic language that automatically solve poor coding
06:29:51 z0k joins (~z0k@206.84.141.12)
06:30:03 <triteraflops> how about something like hm & insert ka va & insert kb vb & insert kc vc
06:30:16 <triteraflops> That's one expression
06:30:30 <davean> triteraflops: yes, and it can produce a single new hashmap just fine
06:30:42 <davean> theres no reason it has to produce intermediate ones, as I've mentioned before
06:31:44 <davean> that can collapse the 3 inserts into one function
06:31:45 × nate4 quits (~nate@98.45.169.16) (Ping timeout: 268 seconds)
06:31:49 <triteraflops> You may have tried explaining it, but think about what it means when you're not understood. Is it really my fault? Usually there is blame on both sides of a communications failure.
06:32:59 <davean> Haskell is pure, so "insert ka va & insert kb vb & insert kc vc" can become a single piece of code that produces no intermediate forms
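The pipeline under discussion, written against Data.Map from containers since it ships with GHC (Data.HashMap behaves analogously; the keys and values are invented): each persistent insert rebuilds only the O(log n) spine down to the affected key and shares the rest of the input's nodes, so no full copy is made even before any compiler optimisation.

```haskell
import Data.Function ((&))
import qualified Data.Map.Strict as Map

-- The three inserts from the discussion as one expression. Purity is
-- what lets an optimiser (or a fused library function) treat this as a
-- single operation producing only the final map.
threeInserts :: Map.Map String Int -> Map.Map String Int
threeInserts hm =
  hm & Map.insert "ka" 1
     & Map.insert "kb" 2
     & Map.insert "kc" 3
```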
06:33:04 <triteraflops> Like, you seem upset right now. My internal audio simulation of your voice has you shouting loudly right now.
06:33:15 <davean> I'm not upset, I'm more amused.
06:33:54 <triteraflops> That is not what is being communicated right now. Think about what that means.
06:34:18 × yauhsien quits (~yauhsien@61-231-23-53.dynamic-ip.hinet.net) (Remote host closed the connection)
06:34:36 × bitdex quits (~bitdex@gateway/tor-sasl/bitdex) (Remote host closed the connection)
06:35:11 <davean> so remember when I mentioned optimizing code vs. data flow?
06:35:21 <davean> we're right back to that.
06:35:38 bitdex joins (~bitdex@gateway/tor-sasl/bitdex)
06:35:41 <triteraflops> So, that 3x insert example I wrote up there. That can be done with a single copy.
06:35:57 × echoreply quits (~echoreply@2001:19f0:9002:1f3b:5400:ff:fe6f:8b8d) (Quit: WeeChat 2.8)
06:35:57 <davean> And I agreed it could be, but not by reusing the memory
06:36:24 <triteraflops> The memory is being reused. That's what a single copy means.
06:36:32 <davean> Is it? :)
06:36:33 <triteraflops> When it's one copy and not three, memory is being reused.
06:36:41 <triteraflops> Just not the original memory.
06:36:46 <davean> Ah no, thats one particular way to impliment it
06:36:55 <davean> you keep running back into that assumption
06:37:07 <davean> you keep thinking about the data, and I keep pointing you at the code
06:37:10 <davean> so think about the code for a bit
06:37:13 echoreply joins (~echoreply@45.32.163.16)
06:38:15 <davean> we don't need to know anything about hm or anything to avoid copies, we can make code that only produces the result
06:38:36 jakalx parts (~jakalx@base.jakalx.net) (Error from remote client)
06:41:36 <davean> it's write-once
06:41:48 <triteraflops> Let's try a different example. What is a way of storing a 4GB array of floats in haskell? I know there are several, but basically, just pick one that is actually going to store the floats, and not an array of thunks or something.
06:42:21 <triteraflops> I would call such an array a strict array
06:42:21 <davean> You'll need an unboxed vector or something
06:42:31 <triteraflops> so this vector, then.
06:42:58 <davean> You're going to run into the same code thing as above: pure code can collapse into a fused function
06:43:48 <davean> but happy to get there naturally
06:44:03 <triteraflops> This is 4 gigs of data. It will exist. It must in order for the program to make any sense.
06:44:39 <davean> yep
06:44:43 <triteraflops> so you could have v & mod na xa & mod nb xb & mod nc xc
06:44:52 <davean> what is mod?
06:44:53 yauhsien joins (~yauhsien@61-231-23-53.dynamic-ip.hinet.net)
06:45:01 <triteraflops> oh yeah mod is already a function
06:45:05 jakalx joins (~jakalx@base.jakalx.net)
06:45:22 <davean> well usually mod is the modulus operator which doesn't make sense here
06:45:23 <triteraflops> ok, import qualified Vector as V
06:45:36 <triteraflops> so you could have v & V.mod na xa & V.mod nb xb & V.mod nc xc
06:45:48 <triteraflops> short for modify
06:46:12 <triteraflops> maybe set would be better
06:46:35 <triteraflops> v & set na xa & set nb xb & set nc xc
06:46:50 <davean> Right so you know what "set na xa & set nb xb & set nc xc" as a function is?
06:47:27 <triteraflops> If I were implementing set, I would be forced to copy the whole array. I can't think of any way around that, besides like a
06:47:29 <davean> "copy the input until na, copy in xa, resume copying to nb, copy in xb, resume copying to nc, copy in xc, resume copying to end"
06:47:41 <davean> That's what that function is
06:48:14 <davean> See how there are no intermediate forms of v?
06:48:19 <davean> You just got directly to the result
06:49:00 <triteraflops> If set's output is a vector, how am I not forced to copy the input?
06:49:05 <triteraflops> I clearly am
06:49:18 <davean> You're forced to copy the parts of the input that aren't changed
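The "copy, splice, resume copying" function davean describes exists as a library operation: the vector package's bulk-update operator `(//)` applies a whole list of index/value writes while producing the result in a single copy. A sketch, assuming Data.Vector.Unboxed from the vector package:

```haskell
import qualified Data.Vector.Unboxed as V

-- Bulk update: all three writes are applied during one copy of the input,
-- exactly the "copy, splice, resume" pseudo-code from the discussion.
setAll :: V.Vector Double -> V.Vector Double
setAll v = v V.// [(0, 1.5), (2, 2.5), (4, 3.5)]

main :: IO ()
main = print (V.toList (setAll (V.replicate 6 0)))
```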
06:49:41 × yauhsien quits (~yauhsien@61-231-23-53.dynamic-ip.hinet.net) (Ping timeout: 246 seconds)
06:50:59 <triteraflops> I can make a Plan type, which could store a list of operations, then get executed.
06:51:21 <triteraflops> Then it would look like
06:51:48 <triteraflops> v & plan & set na xa & set nb xb & set nc xc & ex
06:52:00 <triteraflops> Then only ex would copy
06:52:02 <triteraflops> That's better
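triteraflops's Plan/ex idea can be sketched directly. The names Plan, plan, set, and ex come from the conversation; this implementation (queueing writes and executing them all at once with vector's bulk-update `(//)`) is just one hypothetical way to realize it:

```haskell
import Data.Function ((&))
import qualified Data.Vector.Unboxed as V

-- A Plan pairs the source vector with a list of pending writes.
data Plan a = Plan (V.Vector a) [(Int, a)]

plan :: V.Vector a -> Plan a         -- start planning; no copying
plan v = Plan v []

set :: Int -> a -> Plan a -> Plan a  -- queue a write; still no copying
set n x (Plan v ops) = Plan v ((n, x) : ops)

ex :: V.Unbox a => Plan a -> V.Vector a  -- execute: one copy via bulk update
ex (Plan v ops) = v V.// reverse ops     -- reverse so the latest write wins

main :: IO ()
main = print (V.toList (V.replicate 5 (0 :: Int) & plan & set 1 7 & set 3 9 & ex))
```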
06:52:25 <davean> You don't need plan, because Haskell is pure. You'd only need plan to implement it yourself.
06:52:46 <davean> (Not saying the compiler WILL give you what you want, but morally it can)
06:52:53 <triteraflops> morally...
06:53:02 <davean> (The code approach I generated is allowed - and in fact you will sometimes get it)
06:53:19 <davean> because it will optimize the functions into one function
06:53:25 <triteraflops> You didn't write any code
06:53:48 <davean> "copy the input until na, copy in xa, resume copying to nb, copy in xb, resume copying to nc, copy in xc, resume copying to end" <-- I'm referring to memory operations here
06:53:58 <davean> that's machine pseudo-code, not Haskell code
06:54:03 <davean> the compiler doesn't produce Haskell
06:54:32 <davean> a sequence of sets is a gather operation
06:54:33 <triteraflops> I know the compiler doesn't produce Haskell.
06:54:58 <triteraflops> It's not a gather unless it is implemented as such
06:55:10 <davean> No, it is a gather, it might just not execute as one
06:55:13 <davean> haskell is pure
06:55:30 <davean> consider why I keep mentioning that
06:55:39 <lortabac> davean: I am trying to follow what you say, but I'm lost
06:55:52 <lortabac> are you trying to explain rewrite rules?
06:55:56 <davean> lortabac: Oh that's kinda expected, I'm specifically trying to poke at triteraflops' misunderstandings
06:56:03 <davean> lortabac: no, I did mention them above though
06:56:08 <triteraflops> Ah, Miss Understanding
06:56:19 <triteraflops> Sorry, couldn't resist that one.
06:56:24 <davean> lortabac: I'm actually trying to explain closer to inlining here
06:57:02 <triteraflops> A vector type like this pretty much needs to be internal, and may not be safe.
06:57:09 <davean> (I say that since inlining is most of how GHC actually gets this in practice)
06:57:17 <triteraflops> It is only made pure by things like mandatory copies
06:57:28 <lortabac> how does inlining avoid building intermediate structures?
06:57:30 <davean> triteraflops: That's actually not true! In practice!
06:58:05 <triteraflops> There are internal operations that are not pure. I know there are.
06:58:12 <triteraflops> It's how the pure stuff is implemented.
06:58:21 <triteraflops> some of it, anyway
06:58:53 <davean> triteraflops: Some of it, but nothing I mentioned here about that vector stuff goes outside the pure stuff, strictly standard Haskell stuff
06:59:59 <davean> lortabac: So the usual analogy I'd use is: you just set up the definition of the final thing and pull it like a thread, leaving everything that doesn't end up in it behind; the vector case is a bit more complicated
07:00:10 <davean> lortabac: but I think you'd want the setup to get there ...
07:00:25 <davean> so lortabac how much do you know coming into this?
07:01:02 <davean> and have you ever stepped through inlining manually?
07:01:02 <lortabac> well I know what rewrite rules are, and what inlining is, but I don't understand what you are saying
07:01:30 <davean> lortabac: do you not know how inlining can avoid intermediate forms in general, or the specific example I gave?
07:01:39 <triteraflops> lortabac: so I wrote an example of a series of vector operations that should be possible with a single copy
07:01:53 <triteraflops> if v is a 4 GB vector of float32s, say
07:01:57 <triteraflops> no thunks
07:02:26 <triteraflops> and set n x v modifies v, setting element n to x
07:02:36 <triteraflops> you could have an expression of the form
07:02:50 <triteraflops> v & set na xa & set nb xb & set nc xc
07:03:13 <triteraflops> This should be possible with a single copy
07:03:51 <triteraflops> So, davean, you're saying GHC natively supports vector types that can consolidate multiple set operations like this?
07:04:00 <lortabac> AFAIK it's not possible with a single copy
07:04:18 <davean> triteraflops: I'm saying the ability to do that is inherent to purity
07:04:31 <lortabac> not without rewrite rules at least
07:04:31 <triteraflops> davean: no it isn't
07:04:42 <triteraflops> a pure implementation could copy every time
07:04:48 <triteraflops> and still be pure
07:04:51 <davean> triteraflops: it *could* it doesn't have to
07:05:13 vysn joins (~vysn@user/vysn)
07:05:14 <davean> because "set na xa & set nb xb & set nc xc" is equivalent to the fused function
07:05:25 <triteraflops> davean: well, you just said that all pure implementations must do this with a single copy
07:05:26 zeenk joins (~zeenk@2a02:2f04:a301:3d00:39df:1c4b:8a55:48d3)
07:05:27 <davean> so the compiler is strictly allowed to replace that with the fused version
07:05:28 <lortabac> davean: can you show exactly how purity would give you this optimization?
07:05:40 <davean> triteraflops: I did *not* say they must, I said it was inherent to purity
07:05:51 <triteraflops> That's what inherent to purity means.
07:06:31 <davean> lortabac: purity *allows* this particular one, you'd get it from the optimizer in this particular case
07:06:37 yauhsien joins (~yauhsien@61-231-23-53.dynamic-ip.hinet.net)
07:06:55 <lortabac> davean: ok, but how would you get it without rewrite rules?
07:07:36 <davean> lortabac: well an SSA-style analysis gets it for you
07:07:39 <lortabac> I mean, what is the exact transformation that the optimizer will (or can) perform
07:07:45 <davean> SSA will do this
07:07:57 <davean> That's an optimization pass that will exactly generate that
07:08:19 <lortabac> what is SSA?
07:08:25 <davean> Static single assignment
07:08:37 <davean> basically what we're doing here is keeping the last writes
07:08:55 <davean> but in the abstract
07:09:44 <davean> Theres a few other ways to get that vector example above
07:09:51 <davean> it's actually a pretty easy one to get ...
07:10:40 <lortabac> thanks
07:10:51 <lortabac> does GHC actually do it?
07:10:56 <davean> Almost any symbolic approach to computation will get that
07:11:29 <triteraflops> idk why you say that. This clearly requires special consideration at the level of GHC's vector implementation.
07:11:40 <davean> That particular one? No, partially because Vector is specifically optimized, I don't THINK GHC would get it if Vector wasn't in the way? ... hum, I know how to write fast code with GHC but I don't know all the passes in depth ...
07:11:47 <davean> triteraflops: No, it does NOT
07:12:02 alp_ joins (~alp@user/alp)
07:12:14 <davean> It requires an abstract interpretation step for optimization
07:12:18 <davean> which can apply to any code
07:13:08 <triteraflops> GHC needs to implement this vector at some point
07:13:33 <triteraflops> This implementation will support certain operations.
07:13:46 <triteraflops> These operations may or may not be pure
07:13:52 × taeaad quits (~taeaad@user/taeaad) (Quit: ZNC 1.7.5+deb4 - https://znc.in)
07:13:54 <davean> Yes but those operations can be emergent at the assembly level
07:14:06 <davean> and the code is pure
07:14:46 <triteraflops> so there's some kind of deep optimisation that gives you copy elimination for free?
07:14:55 <davean> It's not even deep
07:15:12 <triteraflops> or like general
07:15:18 <davean> And I explained to you how simpler forms come out explicitly from being lazy, this is a deeper one but it's very shallow
07:15:20 <davean> yes
07:15:29 <triteraflops> a general optimisation that also works on the assembly operations as they apply to the vector...
07:15:30 <davean> in general you can eliminate most of this stuff in pure code
07:15:40 <davean> Yes
07:15:49 <davean> This is what optimizing compilers do, and what they've done for ages
07:16:01 <davean> Haskell has it better because more of these optimizations apply more of the time because its pure
07:16:04 taeaad joins (~taeaad@user/taeaad)
07:16:36 <davean> but yes, we've basically been over the basics of what happened in the 1980s to compilers
07:16:36 × lortabac quits (~lortabac@2a01:e0a:541:b8f0:fd56:2c1b:68f1:73e4) (Quit: WeeChat 2.8)
07:17:21 <davean> This "magic" can be sensitive to disruption, and GHC isn't the best at doing it well but yah, this is very standard entirely general code optimization stuff
07:17:43 <davean> That can, and in practice often does, apply to literally any Haskell you write
07:18:19 <davean> which is part of what Axman6's comment about case statements got to actually
07:18:42 <triteraflops> This is getting interesting now.
07:19:14 lortabac joins (~lortabac@2a01:e0a:541:b8f0:1762:327:502f:fc6c)
07:19:26 <davean> https://en.wikipedia.org/wiki/Static_single-assignment_form you'll note it is a subset of continuation passing style, and *that* is deeply related to functional programming
07:19:34 <davean> so it shouldn't surprise you it applies :)
07:19:43 <davean> SSA is just a particularly easy to understand optimization
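One way to see why SSA transfers so directly, as davean hints: pure Haskell is already single-assignment, since every let-bound intermediate is written exactly once. A sketch of the update chain written with explicit intermediates, each of which an optimizer can see is used only once (Data.Map stands in here for any pure structure):

```haskell
import qualified Data.Map.Strict as M

-- Each intermediate gets a fresh name and is "assigned" exactly once,
-- which is the SSA discipline. An optimizer that tracks uses can see
-- that v1 and v2 never escape and keep only the last writes.
example :: M.Map Int Char -> M.Map Int Char
example v0 =
  let v1 = M.insert 1 'a' v0   -- used only to build v2
      v2 = M.insert 2 'b' v1   -- used only to build v3
      v3 = M.insert 3 'c' v2   -- only v3 reaches the result
  in v3

main :: IO ()
main = print (M.toList (example M.empty))
```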
07:20:07 <triteraflops> aaah, SSA was tickling some memories
07:20:27 <merijn> Essentially, the main topic of optimisation in our compiler course was: here's some neat optimisations
07:20:45 <merijn> here's why you can't do any of them in C, because they're near impossible to prove correct in a mutable setting
07:20:57 <triteraflops> spirv uses ssa
07:21:21 <merijn> triteraflops: Nearly any compiler/optimisation tool that is remotely serious/relevant uses SSA ;)
07:21:33 <davean> and yah, triteraflops we have ways to do stuff even better than the compiler in Haskell but they're extensions, not standards-compliant Haskell
07:22:01 <triteraflops> well, it looks like if something doesn't need extra copies, it likely won't incur them
07:22:08 <davean> merijn: right! Optimizations actually apply in general to Haskell because of purity :)
07:22:13 <triteraflops> with some provisos lol
07:22:41 <davean> triteraflops: I mean uh, shouldn't. Again, GHC is not PARTICULARLY great at reliably applying what it has
07:22:45 <davean> but its pretty good
07:22:54 <davean> you know like 80% of the time it'll work :)
07:23:12 <davean> And another 15% of the time a small wiggle gets it to work
07:23:38 × moet quits (~moet@mobile-166-171-250-122.mycingular.net) (Ping timeout: 246 seconds)
07:23:41 <davean> but yah triteraflops when I decided to engage in this conversation I knew it was going to be a long one
07:23:50 <davean> perspective changes take a while
07:24:29 <triteraflops> well, um, you're also not super good at communicating your ideas lol
07:24:42 × MajorBiscuit quits (~MajorBisc@c-001-001-031.client.tudelft.eduvpn.nl) (Quit: WeeChat 3.5)
07:24:45 <triteraflops> but it's working
07:24:50 nate4 joins (~nate@98.45.169.16)
07:24:54 <triteraflops> I am starting to understand what you mean.
07:24:55 <davean> I should get to bed now though
07:25:12 MajorBiscuit joins (~MajorBisc@c-001-001-031.client.tudelft.eduvpn.nl)
07:25:14 jakalx parts (~jakalx@base.jakalx.net) (Disconnected: Replaced by new connection)
07:25:15 jakalx joins (~jakalx@base.jakalx.net)
07:26:20 <davean> "merijn here's why you can't do any of them in C, because they're near impossible to prove correct in a mutable setting" <--- it still is sinking in even today just how important purity is for optimizing code.
07:28:01 <triteraflops> So, I started with the impression that haskell couldn't mutate large objects, even when it would be safe.
07:28:12 <triteraflops> But now, I see that it actually can.
07:28:44 <triteraflops> The 3x set example with the vector demonstrates this.
07:30:29 odnes joins (~odnes@5-203-220-108.pat.nym.cosmote.net)
07:30:50 × merijn quits (~merijn@c-001-001-018.client.esciencecenter.eduvpn.nl) (Quit: leaving)
07:31:08 × caasih quits (sid13241@id-13241.ilkley.irccloud.com) (Quit: Updating details, brb)
07:31:17 caasih joins (sid13241@id-13241.ilkley.irccloud.com)
07:32:02 × gurkenglas quits (~gurkengla@dslb-002-207-014-022.002.207.pools.vodafone-ip.de) (Ping timeout: 246 seconds)
07:35:14 vpan joins (~0@212.117.1.172)
07:35:20 benin0 joins (~benin@183.82.30.117)
07:39:00 <davean> Briefly back to mention you might notice SSA is directly related to a form of non-strictness, but I have to leave you to consider that on your own.
07:41:38 <triteraflops> So, haskell basically sorta can mutate large objects in cases like the 3x set case above. But it would be inaccurate to call it mutation, per se, because the three sets are not time ordered operations being applied individually to an array. SSA is fusing them.
07:41:40 × neoatnebula quits (~neoatnebu@49.206.16.59) (Quit: Client closed)
07:42:12 <triteraflops> into a single copy+modify operation
07:43:39 <triteraflops> and yeah, this potential reordering and fusing is a kind of non strictness.
07:44:36 gmg joins (~user@user/gehmehgeh)
07:51:24 machinedgod joins (~machinedg@66.244.246.252)
07:54:00 × odnes quits (~odnes@5-203-220-108.pat.nym.cosmote.net) (Ping timeout: 272 seconds)
07:54:39 odnes joins (~odnes@5-203-220-108.pat.nym.cosmote.net)
07:56:35 chele joins (~chele@user/chele)
07:58:40 merijn joins (~merijn@86-86-29-250.fixed.kpn.net)
08:00:13 × tzh quits (~tzh@c-24-21-73-154.hsd1.or.comcast.net) (Quit: zzz)
08:04:00 <merijn> @tell Hecate I disagree with your assessment that ASCII literals for ByteString should die. It's incredibly nice for a lot of the "text" based protocols (HTTP/SMTP/etc.), what should die is error-less partial conversions (see also my comment on the bytestring issue), I already campaigned for that, I dunno, 7 years ago.
08:04:00 <lambdabot> Consider it noted.
08:06:42 <Hecate> merijn: I see
08:08:09 × superbil quits (~superbil@1-34-176-171.hinet-ip.hinet.net) (Ping timeout: 265 seconds)
08:08:25 <merijn> Hecate: also, if you're curious about semi-cursed application of ByteString's ForeignPtr I got those too :p
08:08:36 superbil joins (~superbil@1-34-176-171.hinet-ip.hinet.net)
08:08:51 <merijn> Hecate: https://github.com/merijn/Belewitte/blob/master/benchmark-analysis/src/Utils/Vector.hs
08:11:44 cfricke joins (~cfricke@user/cfricke)
08:13:49 <Hecate> merijn: thank you :)
08:14:03 <Hecate> merijn: btw, do you know how I could check for memory fragmentation of pinned bytestrings?
08:14:57 <maerwald[m]> https://well-typed.com/blog/2021/01/fragmentation-deeper-look/
08:15:58 × ezzieyguywuf quits (~Unknown@user/ezzieyguywuf) (Ping timeout: 240 seconds)
08:16:14 <Hecate> thanks maerwald[m] <3
08:17:08 <merijn> Hecate: High-level guess work from my side says that they shouldn't fragment the haskell heap at all
08:17:44 <merijn> Ah, hmm, pack maybe does
08:19:43 <merijn> I dunno the details of the mutablePinnedByteArray# stuff enough
08:20:01 <lortabac> triteraflops: there are several concepts involved, one is sharing which makes you for example the parts of a data-structure that are not modified
08:20:52 <triteraflops> Is there a word missing in that sentence?
08:20:59 <triteraflops> I'm getting parse errors lol
08:21:11 <lortabac> one is rewrite rules, which are ad-hoc transformations based on templates
08:21:41 <triteraflops> I knew about list fusion
08:21:59 <lortabac> yes, list fusion is performed through rewrite rules
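For reference, a rewrite rule looks like this. The map/map rule below is the textbook example of the mechanism; GHC's real list-fusion rules in base are phrased via build/foldr rather than this direct form, so treat this as an illustration only:

```haskell
{-# RULES
"map/map"  forall f g xs.  map f (map g xs) = map (f . g) xs
  #-}

-- The rule licenses GHC (with -O) to rewrite two traversals into one.
-- Semantics are unchanged, so both sides of the rule must agree:
main :: IO ()
main = print (map (+ 1) (map (* 2) [1 .. 5 :: Int]))
```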
08:22:29 <lortabac> from what I understood, the discussion was quite abstract and detached from what GHC really does
08:22:53 <triteraflops> But I couldn't see how rewrite rules could fuse multiple inserts into a hashmap or multiple vector mutations
08:22:55 <lortabac> in fact GHC does not optimize the vector example through SSA
08:23:03 <triteraflops> That was the part I thought haskell couldn't do at all
08:23:09 <triteraflops> or rather couldn't optimise at all
08:23:10 ccntrq joins (~Thunderbi@dynamic-095-112-145-116.95.112.pool.telefonica.de)
08:24:09 <lortabac> triteraflops: oh sorry a word was missing indeed, I meant "makes you reuse the parts..."
08:25:03 <triteraflops> yeah I knew about subcomponent reuse
08:25:23 <triteraflops> and again considered it irrelevant to the vector example, like fusion
08:27:10 <lortabac> yes, sharing does not make vectors fuse magically
08:28:00 <triteraflops> I can think of other examples involving vectors. Like an iteration in conway's game of life for instance
08:29:43 <triteraflops> or an iteration step comprised of several linear operations on a vector
08:29:51 <lortabac> I think you can achieve fusion in vector by using their stream interface
08:30:36 <triteraflops> well, I think davean is basically right about the assembly level optimisation catching a lot of this.
08:31:40 <lortabac> in theory yes, but I don't think GHC actually does it
08:32:03 <lortabac> optimizations happen at a much higher level in GHC
08:32:15 Pickchea joins (~private@user/pickchea)
08:32:27 × eggplant_ quits (~Eggplanta@108-201-191-115.lightspeed.sntcca.sbcglobal.net) (Remote host closed the connection)
08:33:03 eggplantade joins (~Eggplanta@2600:1700:bef1:5e10:99c9:a0a4:f69e:b22e)
08:33:46 <triteraflops> The way of doing an iteration comprised of 10 linear operations is to do one initial copy to v, allocate a new array, w, then do w = Av, v = Aw, etc.
08:34:24 <triteraflops> or maybe v = Bw
08:35:52 <triteraflops> If you try inlining 10 matrix vector operations, the result will be a total fucking mess
08:36:17 <triteraflops> maybe
08:36:56 × alp_ quits (~alp@user/alp) (Remote host closed the connection)
08:37:04 <triteraflops> actually maybe not lol. The compiler may inline it to a single matrix vector op without realising it lol
08:37:22 dlbh^ joins (~dlbh@50.237.44.186)
08:37:48 <triteraflops> so, inlining
08:37:54 × eggplantade quits (~Eggplanta@2600:1700:bef1:5e10:99c9:a0a4:f69e:b22e) (Ping timeout: 264 seconds)
08:38:20 <triteraflops> inlining is transparent at the assembly / SSA level
08:38:38 alp joins (~alp@user/alp)
08:40:51 <triteraflops> If there are nonlinear operations between the linear operations, inlining will definitely produce a huge mess. Or at least it could. If the operation is expanded once for every element of the output array
08:41:46 <triteraflops> assembly level SSA optimisation should be able to determine what temporary variables are the most useful to keep and share.
08:44:24 dobblego joins (~dibblego@122-199-1-30.ip4.superloop.com)
08:44:24 × dobblego quits (~dibblego@122-199-1-30.ip4.superloop.com) (Changing host)
08:44:24 dobblego joins (~dibblego@haskell/developer/dibblego)
08:44:50 × machinedgod quits (~machinedg@66.244.246.252) (Remote host closed the connection)
08:45:47 × dibblego quits (~dibblego@haskell/developer/dibblego) (Ping timeout: 255 seconds)
08:45:47 dobblego is now known as dibblego
08:46:07 machinedgod joins (~machinedg@66.244.246.252)
08:48:49 mima joins (~mmh@aftr-62-216-210-68.dynamic.mnet-online.de)
08:49:18 × vglfr quits (~vglfr@88.155.20.3) (Ping timeout: 264 seconds)
08:52:11 × z0k quits (~z0k@206.84.141.12) (Ping timeout: 246 seconds)
08:52:29 ccntrq1 joins (~Thunderbi@dynamic-095-112-145-116.95.112.pool.telefonica.de)
08:53:49 × odnes quits (~odnes@5-203-220-108.pat.nym.cosmote.net) (Quit: Leaving)
08:54:10 × ccntrq quits (~Thunderbi@dynamic-095-112-145-116.95.112.pool.telefonica.de) (Ping timeout: 240 seconds)
08:54:10 ccntrq1 is now known as ccntrq
08:54:11 z0k joins (~z0k@206.84.141.12)
09:04:16 odnes joins (~odnes@5-203-220-108.pat.nym.cosmote.net)
09:04:30 × nate4 quits (~nate@98.45.169.16) (Ping timeout: 240 seconds)
09:05:58 mattil joins (~mattil@helsinki.portalify.com)
09:06:26 gurkenglas joins (~gurkengla@dslb-002-207-014-022.002.207.pools.vodafone-ip.de)
09:09:05 × rembo10_ quits (~rembo10@main.remulis.com) (Quit: ZNC 1.8.2 - https://znc.in)
09:09:24 ccntrq1 joins (~Thunderbi@dynamic-095-112-145-116.95.112.pool.telefonica.de)
09:09:41 rembo10 joins (~rembo10@main.remulis.com)
09:10:54 × ccntrq quits (~Thunderbi@dynamic-095-112-145-116.95.112.pool.telefonica.de) (Ping timeout: 264 seconds)
09:10:54 ccntrq1 is now known as ccntrq
09:16:49 jgeerds joins (~jgeerds@55d45f48.access.ecotel.net)
09:19:18 × xff0x quits (~xff0x@125x103x176x34.ap125.ftth.ucom.ne.jp) (Ping timeout: 264 seconds)
09:21:41 ccntrq1 joins (~Thunderbi@dynamic-095-112-145-116.95.112.pool.telefonica.de)
09:23:55 × ccntrq quits (~Thunderbi@dynamic-095-112-145-116.95.112.pool.telefonica.de) (Ping timeout: 256 seconds)
09:23:55 ccntrq1 is now known as ccntrq
09:30:36 × Vq quits (~vq@90-227-195-41-no77.tbcn.telia.com) (Ping timeout: 244 seconds)
09:30:42 ccntrq1 joins (~Thunderbi@dynamic-095-112-145-116.95.112.pool.telefonica.de)
09:31:54 raehik joins (~raehik@cpc95906-rdng25-2-0-cust156.15-3.cable.virginm.net)
09:31:59 mecharyuujin joins (~mecharyuu@2409:4050:2d4b:a853:8048:c716:f88e:d09f)
09:33:26 × ccntrq quits (~Thunderbi@dynamic-095-112-145-116.95.112.pool.telefonica.de) (Ping timeout: 272 seconds)
09:33:26 ccntrq1 is now known as ccntrq
09:35:53 nate4 joins (~nate@98.45.169.16)
09:37:50 chomwitt joins (~chomwitt@2a02:587:dc0d:e600:4907:a32:4c72:2e8c)
09:39:15 × shriekingnoise quits (~shrieking@201.212.175.181) (Quit: Quit)
09:43:49 ccntrq1 joins (~Thunderbi@dynamic-095-112-145-116.95.112.pool.telefonica.de)
09:45:23 × ccntrq quits (~Thunderbi@dynamic-095-112-145-116.95.112.pool.telefonica.de) (Ping timeout: 268 seconds)
09:45:23 ccntrq1 is now known as ccntrq
09:49:10 × odnes quits (~odnes@5-203-220-108.pat.nym.cosmote.net) (Ping timeout: 240 seconds)
09:49:43 unit73e joins (~emanuel@2001:818:e8dd:7c00:32b5:c2ff:fe6b:5291)
09:53:59 leib joins (~leib@2405:201:900a:f088:60b3:aae8:bd87:1f5f)
09:56:18 ccntrq1 joins (~Thunderbi@dynamic-095-112-145-116.95.112.pool.telefonica.de)
09:56:52 × unit73e quits (~emanuel@2001:818:e8dd:7c00:32b5:c2ff:fe6b:5291) (Ping timeout: 272 seconds)
09:57:59 × ccntrq quits (~Thunderbi@dynamic-095-112-145-116.95.112.pool.telefonica.de) (Ping timeout: 246 seconds)
09:57:59 ccntrq1 is now known as ccntrq
09:58:56 × zaquest quits (~notzaques@5.130.79.72) (Remote host closed the connection)
10:00:32 zaquest joins (~notzaques@5.130.79.72)
10:01:53 × gurkenglas quits (~gurkengla@dslb-002-207-014-022.002.207.pools.vodafone-ip.de) (Ping timeout: 256 seconds)
10:15:57 × stiell quits (~stiell@gateway/tor-sasl/stiell) (Remote host closed the connection)
10:16:36 stiell joins (~stiell@gateway/tor-sasl/stiell)
10:18:48 Surobaki joins (~surobaki@137.44.222.80)
10:19:16 × econo quits (uid147250@user/econo) (Quit: Connection closed for inactivity)
10:22:43 ccntrq1 joins (~Thunderbi@dynamic-095-112-145-116.95.112.pool.telefonica.de)
10:23:11 × raehik quits (~raehik@cpc95906-rdng25-2-0-cust156.15-3.cable.virginm.net) (Ping timeout: 246 seconds)
10:24:26 × vysn quits (~vysn@user/vysn) (Read error: Connection reset by peer)
10:25:22 × ccntrq quits (~Thunderbi@dynamic-095-112-145-116.95.112.pool.telefonica.de) (Ping timeout: 272 seconds)
10:25:22 ccntrq1 is now known as ccntrq
10:34:41 raehik joins (~raehik@cpc95906-rdng25-2-0-cust156.15-3.cable.virginm.net)
10:35:04 <dminuoso> triteraflops: By the way, you might be interested in uniqueness types in Clean, which allows for non-heuristic in-place mutation
10:35:28 × yauhsien quits (~yauhsien@61-231-23-53.dynamic-ip.hinet.net) (Remote host closed the connection)
10:36:50 <dminuoso> triteraflops: So there, the compiler can turn a "produce an altered copy" into "mutate in place" if its safe to do so
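Haskell's closest standard analogue to Clean's checked in-place update is making the mutation explicit but locally scoped: the vector package's `modify` copies the vector once, then lets ST-based code destructively update that copy, with purity preserved because the mutable buffer cannot escape. A sketch, assuming the vector package:

```haskell
import qualified Data.Vector.Unboxed as V
import qualified Data.Vector.Unboxed.Mutable as MV

-- V.modify copies the input once, then the callback mutates that copy
-- in place under ST; the result is still a pure function of the input.
bumpFirst :: V.Vector Int -> V.Vector Int
bumpFirst = V.modify (\mv -> MV.modify mv (+ 100) 0)

main :: IO ()
main = print (V.toList (bumpFirst (V.fromList [1, 2, 3])))
```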
10:37:45 × jmdaemon quits (~jmdaemon@user/jmdaemon) (Ping timeout: 248 seconds)
10:38:44 EggGuest joins (~EggGuest@n114-74-2-39.bla3.nsw.optusnet.com.au)
10:39:03 <EggGuest> Hello
10:39:21 × EggGuest quits (~EggGuest@n114-74-2-39.bla3.nsw.optusnet.com.au) (Client Quit)
10:43:11 yauhsien joins (~yauhsien@61-231-23-53.dynamic-ip.hinet.net)
10:47:01 ccntrq1 joins (~Thunderbi@dynamic-095-112-145-116.95.112.pool.telefonica.de)
10:47:39 × cosimone quits (~user@93-44-186-171.ip98.fastwebnet.it) (Remote host closed the connection)
10:48:11 × zmt01 quits (~zmt00@user/zmt00) (Ping timeout: 255 seconds)
10:48:42 × ccntrq quits (~Thunderbi@dynamic-095-112-145-116.95.112.pool.telefonica.de) (Ping timeout: 264 seconds)
10:48:42 ccntrq1 is now known as ccntrq
10:50:30 × geekosaur quits (~geekosaur@xmonad/geekosaur) (Ping timeout: 264 seconds)
10:52:24 geekosaur joins (~geekosaur@xmonad/geekosaur)
10:56:16 PiDelport joins (uid25146@id-25146.lymington.irccloud.com)
10:57:40 cosimone joins (~user@2001:b07:ae5:db26:57c7:21a5:6e1c:6b81)
11:00:59 × merijn quits (~merijn@86-86-29-250.fixed.kpn.net) (Ping timeout: 246 seconds)
11:02:01 ezzieyguywuf joins (~Unknown@user/ezzieyguywuf)
11:03:07 × lortabac quits (~lortabac@2a01:e0a:541:b8f0:1762:327:502f:fc6c) (Quit: WeeChat 2.8)
11:05:12 × coot quits (~coot@213.134.190.95) (Quit: coot)
11:07:18 × raehik quits (~raehik@cpc95906-rdng25-2-0-cust156.15-3.cable.virginm.net) (Ping timeout: 264 seconds)
11:09:10 jakalx parts (~jakalx@base.jakalx.net) ()
11:10:09 raehik joins (~raehik@cpc95906-rdng25-2-0-cust156.15-3.cable.virginm.net)
11:10:29 × nate4 quits (~nate@98.45.169.16) (Ping timeout: 268 seconds)
11:11:02 <vpan> hi, I'm trying to load CSV data using cassava that has separator records, which I'm trying to detect by a `parseRecord` guard in a `FromRecord` instance. The guard condition like `not $ null (v .! 11)` does not work. How to check for emptyness of a field?
11:11:19 king_gs joins (~Thunderbi@187.201.91.195)
11:14:14 × superbil quits (~superbil@1-34-176-171.hinet-ip.hinet.net) (Ping timeout: 265 seconds)
11:14:39 superbil joins (~superbil@1-34-176-171.hinet-ip.hinet.net)
11:15:11 ccntrq1 joins (~Thunderbi@dynamic-095-112-145-116.95.112.pool.telefonica.de)
11:16:47 jakalx joins (~jakalx@base.jakalx.net)
11:17:56 × ccntrq quits (~Thunderbi@dynamic-095-112-145-116.95.112.pool.telefonica.de) (Ping timeout: 272 seconds)
11:17:56 ccntrq1 is now known as ccntrq
11:18:17 × mecharyuujin quits (~mecharyuu@2409:4050:2d4b:a853:8048:c716:f88e:d09f) (Ping timeout: 248 seconds)
11:19:34 mecharyuujin joins (~mecharyuu@2405:204:302a:37df:1901:27c8:4070:e6e2)
11:19:36 <dminuoso> vpan: What do you mean by "does not work"?
11:21:04 <vpan> dminuoso: No instance for (Foldable Parser) arising from a use of ‘null’
11:21:23 <dminuoso> :t null
11:21:24 <lambdabot> Foldable t => t a -> Bool
11:21:42 <dminuoso> (.!) :: FromField a => Record -> Int -> Parser a
11:22:05 <dminuoso> vpan: You probably meant to apply `null` to the result of the parser (v .! 11), not the parser itself
11:22:38 <dminuoso> So do something like `do { r <- v .! 11; guard (not (null r)); .... }`
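dminuoso's suggestion, made self-contained with Maybe standing in for cassava's Parser (both support guard via their Alternative/MonadPlus instances). The `field` accessor below is a hypothetical stand-in for cassava's (.!), not its real implementation:

```haskell
import Control.Monad (guard)

-- Hypothetical stand-in for (v .! 11): look up field i, failing
-- (Nothing) if the index is out of range.
field :: [String] -> Int -> Maybe String
field v i = if i >= 0 && i < length v then Just (v !! i) else Nothing

-- Bind the parsed field first, then guard on the *result*, as suggested:
nonEmptyField :: [String] -> Maybe String
nonEmptyField v = do
  r <- field v 11
  guard (not (null r))
  pure r

main :: IO ()
main = print (nonEmptyField (replicate 11 "x" ++ ["hello"]))
```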
11:25:48 lyle joins (~lyle@104.246.145.85)
11:27:48 merijn joins (~merijn@c-001-001-027.client.esciencecenter.eduvpn.nl)
11:30:54 ccntrq1 joins (~Thunderbi@dynamic-095-112-145-116.95.112.pool.telefonica.de)
11:30:54 × king_gs quits (~Thunderbi@187.201.91.195) (Read error: Connection reset by peer)
11:32:18 king_gs joins (~Thunderbi@187.201.91.195)
11:32:41 × ccntrq quits (~Thunderbi@dynamic-095-112-145-116.95.112.pool.telefonica.de) (Ping timeout: 268 seconds)
11:32:41 ccntrq1 is now known as ccntrq
11:35:09 × dlbh^ quits (~dlbh@50.237.44.186) (Ping timeout: 268 seconds)
11:36:16 <maerwald[m]> guard always feels like goto to me
11:39:08 × leeb quits (~leeb@KD106154144179.au-net.ne.jp) (Ping timeout: 246 seconds)
11:39:31 chele_ joins (~chele@user/chele)
11:40:27 chele is now known as Guest5576
11:40:27 × Guest5576 quits (~chele@user/chele) (Killed (strontium.libera.chat (Nickname regained by services)))
11:40:27 chele_ is now known as chele
11:41:25 <dminuoso> maerwald[m]: Sure, that's what short-circuiting `if cond then return ... else ...` does in traditional languages. :)
11:42:33 × mecharyuujin quits (~mecharyuu@2405:204:302a:37df:1901:27c8:4070:e6e2) (Ping timeout: 268 seconds)
11:43:04 mecharyuujin joins (~mecharyuu@2409:4050:2d4b:a853:8048:c716:f88e:d09f)
11:43:05 <dminuoso> It's a bit bizarre you see this negativity towards `goto` in C, while every single `if/then/else` is essentially just a jnz or equivalent in disguise
11:46:23 <vpan> dminuoso: you're right, the process of a Parser becoming the result type is still a bit magic to me. :) Trying to see if I can wrap the do block in a function and use it as a guard condition. Using an `if` in the function definition feels too imperative. :)
11:47:04 ccntrq1 joins (~Thunderbi@dynamic-095-112-145-116.95.112.pool.telefonica.de)
11:47:44 <maerwald[m]> dminuoso: I use goto all over the place in C to jump to cleanup chunks
11:47:48 <maerwald[m]> But it confuses me in Haskell
11:48:10 <dminuoso> vpan: In Haskell we split things that traditional languages conflate into one thing.
11:48:56 × ccntrq quits (~Thunderbi@dynamic-095-112-145-116.95.112.pool.telefonica.de) (Ping timeout: 246 seconds)
11:48:56 ccntrq1 is now known as ccntrq
11:49:02 <dminuoso> vpan: The act of *evaluating* (v .! 11) does not give you the value of the field back. Instead, it can be thought of as some parser computation that, if executed against some text, would then give you the field.
11:50:28 × superbil quits (~superbil@1-34-176-171.hinet-ip.hinet.net) (Ping timeout: 265 seconds)
11:51:12 <dminuoso> maerwald[m]: Well you can always ContT into complete confusion.
11:51:43 <dminuoso> Twice the power, 8 times the complexity.
11:51:44 × king_gs quits (~Thunderbi@187.201.91.195) (Read error: Connection reset by peer)
11:52:11 <maerwald[m]> Yeah, ContT I refuse to use
11:52:14 mc47 joins (~mc47@xmonad/TheMC47)
11:53:02 king_gs joins (~Thunderbi@187.201.91.195)
11:53:44 <dminuoso> vpan: The thing is, <- does not "make it become the result", it's just syntax sugar around >>=
11:53:57 superbil joins (~superbil@1-34-176-171.hinet-ip.hinet.net)
11:54:28 <vpan> dminuoso: right, binding the result to a variable invokes the "execution" you mentioned, similar to IO actions
11:54:36 <dminuoso> vpan: Not really.
11:54:44 <dminuoso> It doesnt invoke the execution
11:55:06 <dminuoso> look at: (v .! 11) >>= \r -> if (null r) then ... else ...
11:55:12 <dminuoso> It's really rather a kind of continuation
11:55:31 <dminuoso> And the result of >>= computes a larger, more complex "execution"
11:55:39 <dminuoso> But it doesnt "invoke" it
11:56:02 <maerwald[m]> Aren't nix derivations a form of ContT? xD
11:56:22 <dminuoso> maerwald[m]: Mmm in what sense?
11:56:32 <dminuoso> Derivations are just simplistic functions
11:58:53 <maerwald[m]> dminuoso: what do you think of PEP 383
11:59:13 <vpan> dminuoso: ok, so binding to a name is not the point at which the execution happens, we continue to build boxes upon boxes until the result is required and that's when all the boxes spring to life :)
11:59:45 <dminuoso> vpan: https://gist.github.com/dminuoso/3f700d36912f5a2932dc9c476d9ede3d
12:00:01 <dminuoso> vpan: Do you agree that the mere act of writing Recipe3 does not actually *do* *cooking*?
12:00:29 <vpan> sure
12:00:42 <dminuoso> And similarly, Recipe3 itself is not *actual* *cooking* right?
12:00:44 <dminuoso> It's just a recipe
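A minimal sketch of the recipe idea in Haskell (hypothetical types and names, not the code from the linked gist): a value of type Recipe is only a description, >>= glues descriptions into bigger descriptions, and nothing "cooks" until a separate run is applied.

```haskell
-- Building a Recipe with >>= only describes cooking; `run` executes it.
newtype Recipe a = Recipe { run :: IO a }

instance Functor Recipe where
  fmap f (Recipe io) = Recipe (fmap f io)

instance Applicative Recipe where
  pure = Recipe . pure
  Recipe f <*> Recipe x = Recipe (f <*> x)

instance Monad Recipe where
  Recipe x >>= k = Recipe (x >>= run . k)

whipEggs :: Recipe String
whipEggs = Recipe (pure "whipped eggs")

addSugar :: String -> Recipe String
addSugar base = Recipe (pure (base ++ " + sugar"))

-- Merely a bigger recipe; evaluating `cake` performs no cooking at all.
cake :: Recipe String
cake = whipEggs >>= addSugar
```

Only `run cake` actually "cooks"; the >>= on the last line just composed two descriptions into one.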
12:01:16 shiraeeshi joins (~shiraeesh@46.34.206.119)
12:02:16 × merijn quits (~merijn@c-001-001-027.client.esciencecenter.eduvpn.nl) (Ping timeout: 272 seconds)
12:02:30 <vpan> the part that you feed a result you don't yet have feels a bit strange, but I guess that's how lazy evaluation works
12:02:42 <dminuoso> It has nothing to do with lazy evaluation
12:02:47 <dminuoso> vpan: Is recipe3 lazy?
12:03:30 <dminuoso> Let me make a less convoluted example
12:03:38 <dminuoso> Or rather
12:03:55 <dminuoso> "Given some whipped eggs, add sugar and whip in a bowl" this description itself has nothing to do with laziness
12:04:16 <dminuoso> It's a kind of continuation, where you assume that by some undefined process you already have whipped eggs, how do you carry on
12:05:11 <dminuoso> Traditional programming works the same. In C, if a variable `x` is in scope, and you refer to it, you assume that, by some undefined prior process, x has been populated by a value.
12:05:50 <dminuoso> This is sometimes even done explicitly, you may know it as callback style
12:06:32 <geekosaur> conversely, purescript uses this same mechanism and is strict
12:06:54 × superbil quits (~superbil@1-34-176-171.hinet-ip.hinet.net) (Ping timeout: 265 seconds)
12:07:20 superbil joins (~superbil@1-34-176-171.hinet-ip.hinet.net)
12:07:30 × mjacob quits (~mjacob@adrastea.uberspace.de) (Ping timeout: 240 seconds)
12:07:51 × stiell quits (~stiell@gateway/tor-sasl/stiell) (Ping timeout: 268 seconds)
12:08:06 <dminuoso> vpan: Note that this "assuming you have something, what do you do with it" is simply described by just a function.
12:08:33 <[Leary]> Yeah, it's not about laziness. It's just clever use of higher order functions (taking functions as arguments) to pretend at having a singular result at hand. Then, (>>=) takes that function and maybe doesn't use it at all, maybe applies it multiple times, etc.
12:08:35 <dminuoso> `\f -> <expr>` could be read: given some `f`, give <expr>
12:09:21 mjacob joins (~mjacob@adrastea.uberspace.de)
12:10:55 merijn joins (~merijn@c-001-001-027.client.esciencecenter.eduvpn.nl)
12:11:25 jpds1 is now known as jpds
12:12:18 <dminuoso> maerwald[m]: Im really unsure, I think PEP383 is ill-guided in the sense that the real problem its trying to address is lack of a bytestring type.
12:12:28 <dminuoso> But that's only after a skim of the PEP
12:12:35 <vpan> dminuoso: my understanding is that in C if a `x` is assigned the result of a function call, that function is called at the time x is assigned, i.e. when the assignment statement is executed
12:12:54 <dminuoso> (or well, not lack of a bytestring type, but lack of mandating bytestring in in the filesystem apis)
12:13:27 × lisbeths quits (uid135845@id-135845.lymington.irccloud.com) (Quit: Connection closed for inactivity)
12:14:01 <dminuoso> vpan: https://gist.github.com/dminuoso/fd4f2f90e228012b719559ebac1f9ec4
12:14:10 <merijn> vpan: Not really if we wanna be super precise
12:14:27 × cosimone quits (~user@2001:b07:ae5:db26:57c7:21a5:6e1c:6b81) (Remote host closed the connection)
12:14:35 <dminuoso> Ah sorry bad example
12:14:41 <dminuoso> Updated.
12:14:50 <merijn> vpan: C has explicit "sequence points" where "everything that happens before the sequence point is guaranteed to be finished happening after"
12:14:56 × misterfish quits (~misterfis@ip214-130-173-82.adsl2.static.versatel.nl) (Ping timeout: 272 seconds)
12:15:04 <merijn> vpan: But in between sequence points, the ordering and observability is unspecified
12:15:06 <dminuoso> vpan: Do you note how the mere line 5 merely assumes that `somewhere before, x has been populated by a value, we dont know how and dont care why`
12:15:31 <dminuoso> vpan: we could precisely describe this relationship with a function (or even a c routine): \z -> anotherFunc z
12:16:18 <dminuoso> In fact `anotherFunc` already is exactly that routine.
12:16:46 <dminuoso> It doesnt know how and where its parameter comes from, its just a mere description of "what to do assuming it had that parameter"
12:16:54 <vpan> right, the lambda makes the "given z, invoke anotherFunc" more explicit
12:17:22 <dminuoso> In Haskell we codify all these relationships in IO effects with such lambdas.
12:17:28 <dminuoso> (assuming you have any such dependency)
12:17:39 <dminuoso> :t (>>)
12:17:40 <lambdabot> Monad m => m a -> m b -> m b
12:17:58 <dminuoso> Lets you talk about sequencing two actions where you discard the result of the first, which is roughly the equivalent of calling a routine in C but not assigning the value to a binder.
12:18:34 <dminuoso> For everything else, we just provide it into a lambda instead, and then the lambda by scoping can provide the "result" to all subsequent computations
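That desugaring can be made concrete in a few lines (a toy sketch in Maybe rather than IO so the result is checkable; the names are invented):

```haskell
-- do-notation is sugar for >>= and >>; the lambda's scope is what carries
-- each "result" to all subsequent computations.
pairUp :: Maybe (Int, Int)
pairUp = do
  x <- Just 1
  Just ()          -- result discarded: this statement desugars to >>
  y <- Just 2
  pure (x, y)

-- The same computation written out by hand:
pairUp' :: Maybe (Int, Int)
pairUp' =
  Just 1 >>= \x ->
    Just () >>
      Just 2 >>= \y ->
        pure (x, y)
```

Both expressions denote exactly the same value; the do block adds nothing beyond the sugar.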
12:19:45 <shiraeeshi> discussing lambdas, huh
12:19:50 <shiraeeshi> gud, gud
12:20:07 <merijn> tbh, I would strongly caution against all these references to C :p
12:20:18 <merijn> As they mostly don't seem to actually be references to C :p
12:20:21 stiell joins (~stiell@gateway/tor-sasl/stiell)
12:20:35 × MajorBiscuit quits (~MajorBisc@c-001-001-031.client.tudelft.eduvpn.nl) (Ping timeout: 244 seconds)
12:21:14 <shiraeeshi> I'm reading Purely Functional Data Structures y Chris Okasaki
12:21:20 <shiraeeshi> *by
12:21:26 <arahael> I thought the C compiler is basically allowed to ignore sequence point if it determines it's not actually significant?
12:21:49 <merijn> arahael: Man, I can't answer that
12:21:50 cosimone joins (~user@93-44-186-171.ip98.fastwebnet.it)
12:21:59 <merijn> because C's memory model is the stuff of Lovecraftian nightmares
12:22:08 <dminuoso> arahael: In C things there's an as-if rule
12:22:10 <merijn> I'd literally rather quit than be forced to give any definitive answer on that
12:22:19 <arahael> dminuoso: Right, it's that rule I'm thinking about there.
12:22:28 mikoto-chan joins (~mikoto-ch@esm-84-240-99-143.netplaza.fi)
12:22:32 <shiraeeshi> I was surprised when I learned that the examples are in Standard ML, and examples in Haskell are in the appendix
12:22:49 <shiraeeshi> I thought that the book would only use Haskell
12:23:06 leeb joins (~leeb@KD106154144179.au-net.ne.jp)
12:23:37 <dminuoso> arahael: *Very* roughly, outside of `volatile` and memory barriers, it can - for the most part - do whatever it wants to as long as you cant tell the difference.
12:24:12 coot joins (~coot@213.134.190.95)
12:24:17 <dminuoso> Contrary to popular belief, C operates on an abstract machine and even has notions of "objects and types in memory"
12:24:26 bontaq joins (~user@ool-45779fe5.dyn.optonline.net)
12:24:28 <dminuoso> (That is, the semantics are defined on an abstract machine)
12:24:35 <arahael> I knew C has an abstract machine, but I didn't know it had that notion.
12:24:35 <dminuoso> It's most definitely not a high level assembler language.
12:24:46 <dminuoso> You can observe this in the aliasing rules for example
12:25:21 <arahael> I'm not actually aware of teh aliasing rules. :(
12:25:32 <arahael> I just know that aliasing is a thing in C.
12:25:42 × king_gs quits (~Thunderbi@187.201.91.195) (Ping timeout: 272 seconds)
12:26:08 <dminuoso> So if you have a pointer to a struct T, you are not allowed to treat the memory the pointer points at as struct U.
12:26:12 <maerwald[m]> dminuoso: https://www.reddit.com/r/haskell/comments/vivjdo/abstract_filepath_coming_soon
12:26:16 <merijn> arahael: Strict aliasing rules says that two pointers with different types where neither is "char" *cannot* overlap or it's UB
12:26:23 <dminuoso> If you do this you're in undefined behavior, and in fact compilers will generate all kinds of broken assembly if you do this.
12:26:30 <dminuoso> Without any diagnostics.
12:26:46 <arahael> Nice. :)
12:27:01 <arahael> (But I'm expecting that there's a whole bunch of exceptions people end up using in practice)
12:27:02 <dminuoso> Things like reordering writes after reads
12:27:07 <dminuoso> Stuff you really dont expect. :)
12:27:21 <arahael> :)
12:27:45 <dminuoso> arahael: Sure, people often use it because they dont know its disallowed, and it just so happens the compiler (in that version used by the author) doesnt apply aggressive optimizations in those regions.
12:28:09 <dminuoso> Ive found about a dozen of such bugs by accident when working with C
12:28:15 <arahael> Not surprising. :(
12:28:23 <arahael> I try to avoid C in business projects.
12:28:30 <arahael> (I prefer literally anything else)
12:28:46 <dminuoso> merijn: By the way, they *can* overlap.
12:28:49 <dminuoso> That's not the problem.
12:29:43 <merijn> dminuoso: It's UB if they do
12:29:45 <dminuoso> No its not.
12:29:57 <dminuoso> As per 6.5p7, the wording is:
12:30:16 <dminuoso> An object shall have its *stored* *value* *accessed* only by an lvalue expression that has one of the following types: [...]
12:30:21 <dminuoso> (emphasis added by me)
12:30:40 <dminuoso> It's the actual value access that's problematic, pointers can overlap
12:30:47 <dminuoso> Unless the pointers are function pointers; those may not overlap
12:30:57 <merijn> dminuoso: You can't access pointers through the wrong type, sure
12:31:09 kuribas joins (~user@ip-188-118-57-242.reverse.destiny.be)
12:31:17 <merijn> dminuoso: But strict aliasing refers to the fact that pointers of different types in the same function scope cannot reference the same memory
12:31:41 <merijn> dminuoso: It's violated so commonly most compilers don't actually assume people follow strict aliasing
12:31:54 <dminuoso> merijn: The C standard is carefully phrased to not talk about pointers, because its about the abstract memory model rather
12:31:57 <dminuoso> — a type compatible with the effective type of the object,
12:32:14 yrlnry joins (~yrlnry@pool-108-2-150-109.phlapa.fios.verizon.net)
12:32:20 <dminuoso> merijn: thats not true
12:32:29 <dminuoso> Unless you exclude both clang and GCC from "most compilers"
12:32:49 <dminuoso> Both will generate "buggy" (fsvo buggy in UB-land) code if you violate this rule
12:32:56 alejandro joins (~alejandro@47.48.23.95.dynamic.jazztel.es)
12:33:27 <dminuoso> It's mostly read/write dependency reordering that happens
12:33:42 <merijn> dminuoso: gcc disabled strict aliasing for decades, maybe they changed very recently
12:34:55 <dminuoso> merijn: -fstrict-aliasing has been part of -O2 for a long time
12:36:18 × yrlnry quits (~yrlnry@pool-108-2-150-109.phlapa.fios.verizon.net) (Ping timeout: 240 seconds)
12:36:28 × leib quits (~leib@2405:201:900a:f088:60b3:aae8:bd87:1f5f) (Ping timeout: 272 seconds)
12:37:14 ccntrq1 joins (~Thunderbi@dynamic-095-112-145-116.95.112.pool.telefonica.de)
12:38:38 × ccntrq quits (~Thunderbi@dynamic-095-112-145-116.95.112.pool.telefonica.de) (Ping timeout: 246 seconds)
12:38:38 ccntrq1 is now known as ccntrq
12:43:10 vglfr joins (~vglfr@46.96.172.76)
12:43:38 × jgeerds quits (~jgeerds@55d45f48.access.ecotel.net) (Ping timeout: 240 seconds)
12:45:57 ccntrq1 joins (~Thunderbi@dynamic-095-112-145-116.95.112.pool.telefonica.de)
12:47:18 × ccntrq quits (~Thunderbi@dynamic-095-112-145-116.95.112.pool.telefonica.de) (Ping timeout: 240 seconds)
12:47:18 ccntrq1 is now known as ccntrq
12:47:47 zmt00 joins (~zmt00@user/zmt00)
12:48:18 × coot quits (~coot@213.134.190.95) (Ping timeout: 240 seconds)
12:48:47 × mikoto-chan quits (~mikoto-ch@esm-84-240-99-143.netplaza.fi) (Ping timeout: 246 seconds)
12:48:55 <kuribas> People complain about compilation being slow, but do they factor in the time of not writing and running tests that we get from the high level language?
12:49:07 × alejandro quits (~alejandro@47.48.23.95.dynamic.jazztel.es) (Quit: Leaving)
12:49:35 <kuribas> yeah, no compilation time in python/clojure/javascript, but the testsuite takes 15 minutes then.
12:50:21 <maerwald[m]> kuribas: not writing tests? Uhm
12:50:52 <dminuoso> I dont think the type of errors caught by GHCs type system are usually explicitly covered in tests.
12:50:54 <kuribas> maerwald[m]: I mean the time you gain from tests you don't need to write.
12:51:03 <kuribas> dminuoso: for sure they are.
12:51:14 <dminuoso> Not sure what kind of tests you encode into the type system then.
12:51:19 mikoto-chan joins (~mikoto-ch@esm-84-240-99-143.netplaza.fi)
12:51:21 <kuribas> dminuoso: you end up testing pretty much everything in a dynamic language.
12:51:28 <kuribas> Or be happy with bugs in production.
12:51:46 <dminuoso> Sure, but there's no sensible way of writing a test suite that ensures `f is called with a string`
12:51:47 <kuribas> dminuoso: I don't *encode* tests in the type system.
12:52:00 <geekosaur> judging by the javascript console in my browser, runtime bugs are ignored a lot >.>
12:52:02 <dminuoso> Other than just testing all code paths that involve `f` and assert no exception is raised.
12:52:39 <kuribas> dminuoso: yeah, you don't do that. You just try to cover as many code paths as possible and hope nothing breaks.
12:52:50 <kuribas> dminuoso: you don't particularly test if that field has a string.
12:53:00 <dminuoso> kuribas: Covering as many code paths as possible is sensible in Haskell as well.
12:53:01 <kuribas> THough you could put runtime assertions in important places.
12:53:10 <dminuoso> In Haskell you have undefined in pure code and IO exceptions in impure code.
12:53:18 <dminuoso> The slow compilation time does not save you from writing tests.
12:53:21 <merijn> kuribas: I mean, faster compilation time is always better
12:53:32 <maerwald[m]> dminuoso: String ought to be valid Unicode right? Riiight? xD
12:53:40 <kuribas> dminuoso: I usually have a few tests, but rely on the type system to guarantee the consistency. And I test on the repl a lot.
12:53:43 <maerwald[m]> No need to test
12:53:50 <merijn> I don't think GHC's compiles times are prohibitive to stop me from using it. But at the same time if it was 10x slower this would be a good QoL improvement to me
12:54:01 <maerwald[m]> Oh wait, you can encode arbitrary invalid surrogate pairs with string
12:54:03 <maerwald[m]> Oops
12:54:19 <kuribas> merijn: if it is free for me, sure. But what if it means I have to give up on nice abstractions?
12:54:22 <dminuoso> And to be fair, -O0 or disabling code generation gets you very far in terms of responsiveness for rapid development
12:54:30 <kuribas> good point.
12:54:31 <dminuoso> (And Im sure HLS does these things)
12:55:17 <dminuoso> At that point it's mostly just the ability to apply a rapid fix in production that is limited by compilation time with optimizations enabled
12:55:19 <maerwald[m]> The sad truth is the type system doesn't enforce most invariants
12:55:27 <kuribas> Also, a type checker in the IDE usually doesn't compile the whole system.
12:56:08 <kuribas> maerwald[m]: I mean, you *can*, with dependent types. But probably shouldn't.
12:56:33 × mikoto-chan quits (~mikoto-ch@esm-84-240-99-143.netplaza.fi) (Ping timeout: 268 seconds)
12:56:38 <kuribas> maerwald[m]: having some runtime checks is just fine.
12:56:48 <kuribas> you can always weigh costs and benefits.
12:56:54 <maerwald[m]> You mean TESTS
12:57:06 <kuribas> for example.
12:57:09 <shiraeeshi> finding bugs at compile time works in case of Rust
12:57:23 <shiraeeshi> borrow checker and all that
12:57:38 <maerwald[m]> A small subset of bugs
12:57:39 <merijn> kuribas: The thing is that GHC 8.x is far slower than it needs to be
12:57:40 <shiraeeshi> it's a killer feature of Rust
12:58:19 <merijn> kuribas: Nobody is arguing to remove features to speed up compile times, it's just that nobody really put in effort to get things fast. Although the 9.x series has seen pretty big improvements
12:58:44 <kuribas> subsumption?
12:59:10 <merijn> unrelated to that
12:59:11 <geekosaur> a decent part of that was discovering and fixing some "laziness leaks"
12:59:22 <merijn> Just boring old plumbing and engineering work
12:59:24 <shiraeeshi> I also heard a saying "in languages like haskell, if it compiles, it works"
12:59:26 × mattil quits (~mattil@helsinki.portalify.com) (Remote host closed the connection)
12:59:28 <merijn> Profiling hotspots, improving them
12:59:34 pleo joins (~pleo@user/pleo)
12:59:35 <geekosaur> shiraeeshi, one could hope
12:59:41 <merijn> shiraeeshi: I mean, it's obviously bullshit. But also, kinda not
12:59:50 <kuribas> shiraeeshi: it works, but maybe not efficiently :)
13:00:08 <merijn> shiraeeshi: It's mostly that you get to focus your testing efforts on less boring kinds of tests
13:00:14 <maerwald[m]> merijn: it's pure BS
13:00:22 <kuribas> for me more like, it "usually" works.
13:00:27 <kuribas> Or "most of the code works".
13:00:54 <merijn> maerwald[m]: Overall my compiling code is a closer approximation of "works correctly" than it has ever been in C/C++/Python
13:00:55 <kuribas> for clojure, it's "it "usually" doesn't work. and "most of the code doesn't work".
13:01:13 <merijn> Of course claiming "compiling code is bug free" is nonsense, but that should be obvious to anyone
13:01:27 <merijn> But I waste a whole lot less time chasing down boring stupid shit in haskell
13:01:33 yrlnry joins (~yrlnry@pool-108-2-150-109.phlapa.fios.verizon.net)
13:01:42 <kuribas> also, "after refactoring, it very likely works.", and in clojure: "after refactoring, it very likely is broken".
13:01:43 <merijn> and like, 70% of my debugging efforts in C/C++ go into said "stupid shit"
13:01:51 <kuribas> merijn: yeah this
13:01:57 <maerwald[m]> merijn: I've spent a lot of time chasing boring bugs in Haskell
13:02:00 <merijn> So overall, it *feels* much more correct
13:02:23 <merijn> maerwald[m]: Well, find us 50-100 more anecdotal comments and we can discuss it :p
13:02:44 <maerwald[m]> It's just that refactoring in Haskell is usually less error prone
13:02:58 <maerwald[m]> But my python prototypes don't have significantly more bugs
13:03:49 <maerwald[m]> E.g. Python doesn't use PEP 383 incorrectly like Haskell
13:04:46 king_gs joins (~Thunderbi@187.201.91.195)
13:06:03 progress__ joins (~fffuuuu_i@45.112.243.220)
13:06:33 <kuribas> maerwald[m]: I find my clojure code much harder to maintain.
13:07:10 <maerwald[m]> kuribas: agreed
13:07:21 <kuribas> and clojure > Python
13:07:51 <maerwald[m]> I will never forget how removing a bracket didn't cause a compile error, but made the webpage go blank when clicking a button
13:07:58 <maerwald[m]> Clojurescript is cancer
13:08:14 <zzz> in haskell, debugging is called "learning"
13:08:39 × litharge quits (litharge@libera/bot/litharge) (Quit: restarting)
13:09:07 litharge joins (litharge@libera/bot/litharge)
13:09:47 × azimut quits (~azimut@gateway/tor-sasl/azimut) (Quit: ZNC - https://znc.in)
13:10:31 azimut joins (~azimut@gateway/tor-sasl/azimut)
13:11:58 <kuribas> debugging in haskell for me is mostly putting trace statements.
13:12:11 <kuribas> And trying out functions on the repl.
13:12:33 <kuribas> there are no advanced debugging tools, but I rarely miss them.
13:13:41 <maerwald[m]> kuribas: ghc-debug
13:14:17 misterfish joins (~misterfis@87.215.131.98)
13:14:38 <shiraeeshi> kuribas: how about labeling cost centers and then viewing statistics?
13:14:50 <kuribas> shiraeeshi: that's profiling :)
13:14:52 <shiraeeshi> to debug space leaks
13:15:38 Unicorn_Princess joins (~Unicorn_P@93-103-228-248.dynamic.t-2.net)
13:15:40 <kuribas> maerwald[m]: cool, I didn't know about that.
13:15:48 <shiraeeshi> I wonder if profiling skills are more needed for Haskell developers than others
13:15:57 <shiraeeshi> due to laziness
13:16:02 × superbil quits (~superbil@1-34-176-171.hinet-ip.hinet.net) (Ping timeout: 265 seconds)
13:16:23 MajorBiscuit joins (~MajorBisc@wlan-145-94-167-213.wlan.tudelft.nl)
13:16:27 superbil joins (~superbil@1-34-176-171.hinet-ip.hinet.net)
13:19:13 <kuribas> shiraeeshi: in my opinion: no
13:20:13 <shiraeeshi> if only we could gather statistics about space leaks from every running haskell program
13:20:14 dlbh^ joins (~dlbh@50.237.44.186)
13:20:27 <kuribas> I have not yet seen space leaks in my programs.
13:20:33 <shiraeeshi> to be able to tell how often they occur compared to other languages
13:21:47 <shiraeeshi> kuribas: you are good at avoiding them?
13:22:04 <shiraeeshi> or they just don't occur and you don't even worry about them?
13:22:45 <kuribas> both probably?
13:23:04 <kuribas> well I mean, the programs I write are not sensitive to them maybe.
13:23:25 <kuribas> Though I avoid pitfalls, like foldl instead of foldl'.
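The foldl pitfall kuribas mentions, as a toy sketch: foldl builds a left-nested pile of (+) thunks before anything is forced, while foldl' forces the accumulator at every step.

```haskell
import Data.List (foldl')

lazySum, strictSum :: [Int] -> Int
lazySum   = foldl  (+) 0  -- accumulates unevaluated thunks: (((0+1)+2)+...)
strictSum = foldl' (+) 0  -- forces the accumulator each step, constant space
```

Both compute the same number; only the space behavior differs, and on a large enough input the lazy version can exhaust the stack when the thunk chain is finally forced.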
13:24:13 <dolio> I think it's just not very hard to preemptively avoid most space leaks once you have experience.
13:24:40 <dolio> Doesn't take much thought.
13:26:18 <shiraeeshi> I find myself discussing space leaks more and more lately
13:26:29 <shiraeeshi> but actually I wanted to ask about something else
13:26:29 lortabac joins (~lortabac@2a01:e0a:541:b8f0:1762:327:502f:fc6c)
13:27:00 ccntrq1 joins (~Thunderbi@dynamic-095-112-145-116.95.112.pool.telefonica.de)
13:27:03 <shiraeeshi> I'm reading Purely Functional Data Structures by Chris Okasaki
13:27:27 <shiraeeshi> I'm at the beginning
13:27:41 <shiraeeshi> Leftist heaps
13:28:01 <shiraeeshi> there is an exercise 3.3
13:28:07 × superbil quits (~superbil@1-34-176-171.hinet-ip.hinet.net) (Ping timeout: 265 seconds)
13:28:14 <shiraeeshi> it's about proving that fromList runs in O(n) time
13:28:46 <shiraeeshi> it says: "Instead of merging the heaps in one right-to-left or left-to-right pass using foldr or foldl, merge the heaps in [log n] passes, where each pass merges adjacent pairs of heaps."
13:28:57 × ccntrq quits (~Thunderbi@dynamic-095-112-145-116.95.112.pool.telefonica.de) (Ping timeout: 248 seconds)
13:28:57 ccntrq1 is now known as ccntrq
13:29:13 <maerwald[m]> dolio: not everything is a space leak. Evaluating deep thunks in hot loops can kill your performance by orders of magnitude, and forcing the result at the callsite won't fix it
13:30:15 <maerwald[m]> It's really hard to debug. There are no space leaks, and the profiler doesn't tell you what's going on either
13:30:48 <dolio> My comment was about space leaks, because that's what the topic was.
13:31:33 waleee joins (~waleee@2001:9b0:213:7200:cc36:a556:b1e8:b340)
13:31:34 superbil joins (~superbil@1-34-176-171.hinet-ip.hinet.net)
13:31:40 coot joins (~coot@213.134.190.95)
13:33:08 <shiraeeshi> earlier in the chapter the book says that merge runs in O(log n) time
13:33:32 <shiraeeshi> and we invoke merge ceil(log n) times, right?
13:33:52 <shiraeeshi> how does it follow that fromList runs in O(n) time?
13:35:53 <maerwald[m]> dolio: the topic was about profiling too ;)
13:36:11 <dolio> When I commented people were specifically talking about space leaks, and that's why I wrote what I wrote.
13:36:43 eggplantade joins (~Eggplanta@2600:1700:bef1:5e10:99c9:a0a4:f69e:b22e)
13:37:46 <shiraeeshi> am I not seeing something obvious here?
13:37:50 <[Leary]> shiraeeshi: I'm not saying this is it, but you should be aware that log n is O(n) and so is (log n)^2. O(n) time technically means linear or sub-linear.
13:38:34 <geekosaur> besides which, the time is likely dominated by the time it takes to traverse the list which is by definition O(n)
13:39:23 <shiraeeshi> [Leary]: yeah, that's what's not clear to me: how do you go from log n to n? because it seems to me that you should make it power of two
13:40:22 <shiraeeshi> geekosaur: the exercise says that you should merge adjacent pairs instead of using folds, if I understood you correctly
13:40:50 <dolio> shiraeeshi: When you use foldl/r, you'll be repeatedly merging small things into increasingly large things, so the cost of each operation will approach log n, as the size of the large thing is O(n) for much of the time.
13:40:56 <shiraeeshi> you divide the list to pairs and merge them, and you keep doing that until only one heap left
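That pairwise strategy can be sketched directly (a minimal leftist heap after Okasaki, translated to Haskell; fromList here does the ceil(log n) passes the exercise describes, each pass merging adjacent pairs):

```haskell
-- Leftist heap: each node stores its rank (length of the rightmost spine).
data Heap a = E | T Int a (Heap a) (Heap a)

rank :: Heap a -> Int
rank E           = 0
rank (T r _ _ _) = r

-- Smart constructor: keep the higher-rank child on the left.
makeT :: a -> Heap a -> Heap a -> Heap a
makeT x a b
  | rank a >= rank b = T (rank b + 1) x a b
  | otherwise        = T (rank a + 1) x b a

-- O(log n) merge along the right spines.
merge :: Ord a => Heap a -> Heap a -> Heap a
merge h E = h
merge E h = h
merge h1@(T _ x a1 b1) h2@(T _ y a2 b2)
  | x <= y    = makeT x a1 (merge b1 h2)
  | otherwise = makeT y a2 (merge h1 b2)

-- Turn each element into a singleton heap, then repeatedly merge
-- adjacent pairs until one heap remains: ceil(log n) passes.
fromList :: Ord a => [a] -> Heap a
fromList [] = E
fromList xs = go (map (\x -> T 1 x E E) xs)
  where
    go [h] = h
    go hs  = go (mergePairs hs)
    mergePairs (h1:h2:rest) = merge h1 h2 : mergePairs rest
    mergePairs hs           = hs

-- Drain the heap in ascending order (handy for checking the result).
toList :: Ord a => Heap a -> [a]
toList E           = []
toList (T _ x a b) = x : toList (merge a b)
```

The foldl alternative would instead be `foldr merge E . map (\x -> T 1 x E E)`, which keeps merging singletons into one ever-growing accumulator; the pairwise version is what the O(n) bound in the exercise is about.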
13:41:22 <[Leary]> This is just the definition of big-O; it's not an "equals", it's a "less than or equals".
13:41:26 × eggplantade quits (~Eggplanta@2600:1700:bef1:5e10:99c9:a0a4:f69e:b22e) (Ping timeout: 255 seconds)
13:42:38 <[Leary]> So anything that's O(n) is automatically O(n^2), but not necessarily O(log n).
13:42:43 <[Leary]> (e.g.)
13:43:15 <maerwald> dolio: and when you were talking about profiling space leaks, I wrote what I wrote
13:43:32 <shiraeeshi> [Leary]: I see your point, but I think the author expects something like an algebraic proof that concludes that the time is O(n)
13:44:01 <dolio> Pair-wise, you'll be repeatedly merging things of the same size to double their size, and most of the merges will not be of things that are O(n) size.
13:44:28 <dolio> Or at least, needn't be counted that way.
13:44:57 × waleee quits (~waleee@2001:9b0:213:7200:cc36:a556:b1e8:b340) (Ping timeout: 248 seconds)
13:45:19 <shiraeeshi> dolio: hmm, "repeatedly merging small things into increasingly large things" sounds like it could lead to the right answer, but I don't see how you would prove the O(n) time. I think I'm missing something.
13:46:17 × jespada quits (~jespada@cpc121022-nmal24-2-0-cust171.19-2.cable.virginm.net) (Ping timeout: 256 seconds)
13:47:06 <shiraeeshi> oh, right. the "n" when we say that merge takes O(log n) time is not the same as "n" when we say that fromList takes O(n) time
13:47:25 <dolio> For example, with foldl, by the time you've processed half the list, your accumulator has size n/2, so the remaining n/2 merges take ~log n time.
13:47:44 × Pickchea quits (~private@user/pickchea) (Ping timeout: 268 seconds)
13:47:46 <dolio> For the pair-wise merges, only the last merge has size n/2.
13:48:18 <dolio> The previous one had n/4, before that n/8, which is like n/2^k.
13:48:30 xff0x joins (~xff0x@b133147.ppp.asahi-net.or.jp)
13:49:03 mikoto-chan joins (~mikoto-ch@esm-84-240-99-143.netplaza.fi)
13:50:30 × Colere quits (~colere@about/linux/staff/sauvin) (Ping timeout: 264 seconds)
13:51:00 waleee joins (~waleee@2001:9b0:213:7200:cc36:a556:b1e8:b340)
13:51:13 jespada joins (~jespada@cpc121022-nmal24-2-0-cust171.19-2.cable.virginm.net)
13:51:31 <shiraeeshi> ok, so let's say we have a list with 16 elements
13:51:57 <shiraeeshi> there are 8 pairs, so we invoke merge 8 times
13:53:00 <shiraeeshi> the size of the heaps to be merged is 1, so merge takes O(1) time (or we can say O(log 1))
13:53:08 <dolio> So many more of the merges happen at sizes that are 'effectively constant', and only an effectively constant number at the end are O(n). At least, that's sort of an intuitive idea behind it.
13:53:31 <shiraeeshi> so going from 16 elements to 8 elements takes 8*O(1) time
13:54:30 <shiraeeshi> now going from 8 elements to 4 elements takes 4*O(log2) = 4*O(1) time
13:55:08 × alp quits (~alp@user/alp) (Ping timeout: 268 seconds)
13:55:08 × mon_aaraj quits (~MonAaraj@user/mon-aaraj/x-4416475) (Ping timeout: 268 seconds)
13:55:22 <shiraeeshi> going from 4 elements to 2 elements takes 2*O(log4) = 2*O(2)
13:56:07 <shiraeeshi> and from 2 elements to 1 element should take O(log 8) = O(3) time
13:56:19 × cfricke quits (~cfricke@user/cfricke) (Quit: WeeChat 3.5)
13:56:59 × progress__ quits (~fffuuuu_i@45.112.243.220) (Ping timeout: 268 seconds)
13:57:51 <shiraeeshi> total time is the sum of all those iterations
13:58:49 <shiraeeshi> sum (for i from 1 to half of n) (2*i * O(log (n - i)))
13:59:05 <geekosaur> I don't think you get to do math with big-Os that way
13:59:18 × Surobaki quits (~surobaki@137.44.222.80) (Ping timeout: 240 seconds)
13:59:39 <geekosaur> in particular, your "8*O(1)" is O(n/2) which is O(n) with a constant factor (that drops out) of 0.5
13:59:46 <dolio> Yeah, I mean, reasoning about a fixed example is erroneous, because it's all constant. But it helps see the pattern.
13:59:49 cfricke joins (~cfricke@user/cfricke)
13:59:56 <[Leary]> Sounds like it's amenable to induction, if you want to do that a bit more rigorously.
14:01:01 Surobaki joins (~surobaki@137.44.222.80)
14:01:52 mon_aaraj joins (~MonAaraj@user/mon-aaraj/x-4416475)
14:01:55 <dolio> The sum is what to think about, though. Then show that it's O(n).
14:02:00 <dolio> I think.
14:03:36 nate4 joins (~nate@98.45.169.16)
14:05:44 × dlbh^ quits (~dlbh@50.237.44.186) (Remote host closed the connection)
14:06:42 <dolio> Or maybe, `Σ(i = 1..log n) 2^(n-i) * i` is better?
14:08:42 causal joins (~user@50.35.83.177)
14:08:42 × nate4 quits (~nate@98.45.169.16) (Ping timeout: 268 seconds)
14:08:42 × waleee quits (~waleee@2001:9b0:213:7200:cc36:a556:b1e8:b340) (Ping timeout: 268 seconds)
14:09:49 <dolio> Oh, not 2^(n-i), 2^(log n - i)
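With that correction the sum closes out in one step: since 2^(lg n - i) = n/2^i and the series sum of i/2^i converges to 2, the whole expression is bounded by 2n.

```latex
\sum_{i=1}^{\lg n} 2^{\lg n - i}\, i
  \;=\; n \sum_{i=1}^{\lg n} \frac{i}{2^i}
  \;<\; n \sum_{i=1}^{\infty} \frac{i}{2^i}
  \;=\; 2n \;=\; O(n).
```

Intuitively this matches the earlier point: most merges happen on tiny heaps, and the geometric shrinkage of the 1/2^i factor beats the linear growth of the i factor.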
14:12:15 × lortabac quits (~lortabac@2a01:e0a:541:b8f0:1762:327:502f:fc6c) (Quit: WeeChat 2.8)
14:12:18 waleee joins (~waleee@2001:9b0:213:7200:cc36:a556:b1e8:b340)
14:12:21 <dolio> Then turn it into a similar definite integral and ask wolfram alpha to solve it, and say something about correspondences between sums and integrals.
14:14:44 jgeerds joins (~jgeerds@55d45f48.access.ecotel.net)
14:15:08 <shiraeeshi> the way I see it is, we have a triangle or a pyramid of decreasing number of increasing coefficients
14:15:38 <shiraeeshi> but how do you go from that pyramid to n is not clear to me
14:16:37 <shiraeeshi> (I should remind myself that it's not exactly n, but something like n in an asymptotic sense)
14:17:18 × Surobaki quits (~surobaki@137.44.222.80) (Ping timeout: 240 seconds)
14:18:06 <shiraeeshi> wait, all the iterations take the same time
14:18:11 Surobaki joins (~surobaki@137.44.222.80)
14:19:11 × stefan-_ quits (~cri@42dots.de) (Ping timeout: 268 seconds)
14:19:56 <shiraeeshi> no, sounds like it leads to O(n * log n)
14:21:08 <shiraeeshi> perhaps I should try induction
14:22:07 <[Leary]> It's almost always the simplest way, when it applies.
14:22:15 ccntrq1 joins (~Thunderbi@dynamic-095-112-145-116.95.112.pool.telefonica.de)
14:22:39 <dolio> I guess wolfram can just do the sum, actually. No need for integrals.
14:23:37 stefan-_ joins (~cri@42dots.de)
14:23:53 × ccntrq quits (~Thunderbi@dynamic-095-112-145-116.95.112.pool.telefonica.de) (Ping timeout: 248 seconds)
14:25:02 ccntrq joins (~Thunderbi@dynamic-077-006-224-164.77.6.pool.telefonica.de)
14:25:35 [itchyjunk] joins (~itchyjunk@user/itchyjunk/x-7353470)
14:26:26 × ccntrq1 quits (~Thunderbi@dynamic-095-112-145-116.95.112.pool.telefonica.de) (Ping timeout: 246 seconds)
14:28:38 × Surobaki quits (~surobaki@137.44.222.80) (Ping timeout: 240 seconds)
14:31:28 Vq joins (~vq@90-227-195-41-no77.tbcn.telia.com)
14:31:38 × mecharyuujin quits (~mecharyuu@2409:4050:2d4b:a853:8048:c716:f88e:d09f) (Ping timeout: 240 seconds)
14:35:55 Surobaki joins (~surobaki@137.44.222.80)
14:37:56 × vpan quits (~0@212.117.1.172) (Quit: Leaving.)
14:43:36 ccntrq1 joins (~Thunderbi@dynamic-077-006-224-164.77.6.pool.telefonica.de)
14:44:24 × Surobaki quits (~surobaki@137.44.222.80) (Ping timeout: 272 seconds)
14:45:13 × ccntrq quits (~Thunderbi@dynamic-077-006-224-164.77.6.pool.telefonica.de) (Ping timeout: 256 seconds)
14:45:13 ccntrq1 is now known as ccntrq
14:46:31 Surobaki joins (~surobaki@137.44.222.80)
14:46:36 shriekingnoise joins (~shrieking@201.212.175.181)
14:49:59 dsrt^ joins (~dsrt@50.237.44.186)
14:52:54 × raehik quits (~raehik@cpc95906-rdng25-2-0-cust156.15-3.cable.virginm.net) (Ping timeout: 264 seconds)
14:53:13 <dolio> shiraeeshi: BTW, if you have access to Knuth's Concrete Mathematics then Σ_k k*2^k is an example he shows how to solve using the calculus of finite differences. And that's the complicated part of Σ_k (lg n - k)*2^k, which is another way to write the sum.
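A quick way to sanity-check the sum dolio mentions is to compare it against its closed form (the formula below is the standard result that finite differences yield; it is not quoted from the channel):

```haskell
-- Checking the closed form of Σ_k k*2^k numerically:
--   sum_{k=0}^{n} k * 2^k = (n - 1) * 2^(n+1) + 2
sumK :: Integer -> Integer
sumK n = sum [k * 2 ^ k | k <- [0 .. n]]

closedForm :: Integer -> Integer
closedForm n = (n - 1) * 2 ^ (n + 1) + 2

main :: IO ()
main = print (all (\n -> sumK n == closedForm n) [0 .. 20])  -- True
```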
14:54:15 Sgeo joins (~Sgeo@user/sgeo)
14:55:34 × mon_aaraj quits (~MonAaraj@user/mon-aaraj/x-4416475) (Ping timeout: 268 seconds)
14:55:35 raehik joins (~raehik@cpc95906-rdng25-2-0-cust156.15-3.cable.virginm.net)
14:56:46 × Surobaki quits (~surobaki@137.44.222.80) (Read error: Connection reset by peer)
14:57:14 mon_aaraj joins (~MonAaraj@user/mon-aaraj/x-4416475)
14:59:36 × pleo quits (~pleo@user/pleo) (Ping timeout: 272 seconds)
15:00:34 ccntrq1 joins (~Thunderbi@dynamic-077-006-224-164.77.6.pool.telefonica.de)
15:00:36 × FinnElija quits (~finn_elij@user/finn-elija/x-0085643) (Quit: FinnElija)
15:02:08 × ccntrq quits (~Thunderbi@dynamic-077-006-224-164.77.6.pool.telefonica.de) (Ping timeout: 272 seconds)
15:02:08 ccntrq1 is now known as ccntrq
15:02:11 FinnElija joins (~finn_elij@user/finn-elija/x-0085643)
15:03:54 × king_gs quits (~Thunderbi@187.201.91.195) (Read error: Connection reset by peer)
15:03:55 mecharyuujin joins (~mecharyuu@2409:4050:2d4b:a853:8048:c716:f88e:d09f)
15:08:53 king_gs joins (~Thunderbi@187.201.91.195)
15:11:39 ccntrq1 joins (~Thunderbi@dynamic-077-006-224-164.77.6.pool.telefonica.de)
15:13:22 × cfricke quits (~cfricke@user/cfricke) (Quit: WeeChat 3.5)
15:13:54 × ccntrq quits (~Thunderbi@dynamic-077-006-224-164.77.6.pool.telefonica.de) (Ping timeout: 264 seconds)
15:13:54 ccntrq1 is now known as ccntrq
15:15:46 jao joins (~jao@cpc103048-sgyl39-2-0-cust502.18-2.cable.virginm.net)
15:17:21 <shiraeeshi> dolio: thanks, gonna take a look if I get stuck
15:20:02 jakalx parts (~jakalx@base.jakalx.net) (Error from remote client)
15:20:20 jakalx joins (~jakalx@base.jakalx.net)
15:21:29 <triteraflops> dminuoso: Yeah, I knew about this kind of typing, which is why I thought haskell couldn't do it on its own, without its linear extensions, for example.
15:28:18 × mixfix41 quits (~sdenynine@user/mixfix41) (Ping timeout: 240 seconds)
15:31:16 × ccntrq quits (~Thunderbi@dynamic-077-006-224-164.77.6.pool.telefonica.de) (Ping timeout: 272 seconds)
15:33:12 alp joins (~alp@user/alp)
15:33:57 × hgolden quits (~hgolden2@cpe-172-251-233-141.socal.res.rr.com) (Remote host closed the connection)
15:35:00 fryguybob joins (~fryguybob@cpe-74-67-169-145.rochester.res.rr.com)
15:37:19 hgolden joins (~hgolden2@cpe-172-251-233-141.socal.res.rr.com)
15:38:19 × jrm quits (~jrm@user/jrm) (Quit: ciao)
15:39:21 × jgeerds quits (~jgeerds@55d45f48.access.ecotel.net) (Ping timeout: 268 seconds)
15:39:32 jrm joins (~jrm@user/jrm)
15:40:10 × hgolden quits (~hgolden2@cpe-172-251-233-141.socal.res.rr.com) (Remote host closed the connection)
15:43:30 hgolden joins (~hgolden2@cpe-172-251-233-141.socal.res.rr.com)
15:45:38 × dsrt^ quits (~dsrt@50.237.44.186) (Ping timeout: 240 seconds)
15:47:20 dsrt^ joins (~dsrt@50.237.44.186)
15:50:10 × mcglk quits (~mcglk@131.191.49.120) (Ping timeout: 240 seconds)
15:52:29 eggplantade joins (~Eggplanta@2600:1700:bef1:5e10:99c9:a0a4:f69e:b22e)
15:57:21 <juri_> hmm. bumped GHC to 9.0.2, and now rounded-hw is failing to dependency resolve. :(
15:58:41 juri_ pokes it with a stick.
15:59:33 shapr explodes
15:59:35 mcglk joins (~mcglk@131.191.49.120)
16:00:11 × RudraveerMandal[ quits (~magphimat@2001:470:69fc:105::2:eb9) (Quit: You have been kicked for being idle)
16:00:29 × brettgilio quits (~brettgili@c9yh.net) (Quit: The Lounge - https://thelounge.chat)
16:01:46 <shapr> @seen dons
16:01:47 <lambdabot> I saw dons leaving #haskell 1m 2h 46m 2s ago.
16:01:53 <shapr> huh, a month?
16:02:02 <shapr> I'm glad @seen is working again
16:02:34 brettgilio joins (~brettgili@c9yh.net)
16:02:45 × eggplantade quits (~Eggplanta@2600:1700:bef1:5e10:99c9:a0a4:f69e:b22e) (Remote host closed the connection)
16:02:46 × cherries[m] quits (~cherriesm@2001:470:69fc:105::2:16c0) (Quit: You have been kicked for being idle)
16:02:57 × phuegrvs[m] quits (~phuegrvsm@2001:470:69fc:105::1:65e4) (Quit: You have been kicked for being idle)
16:03:00 pleo joins (~pleo@user/pleo)
16:06:01 × brettgilio quits (~brettgili@c9yh.net) (Client Quit)
16:08:37 <lambdabot> ∿∿∿∿∿∿∿∿∿∿∿∿∿
16:09:11 brettgilio joins (~brettgili@c9yh.net)
16:10:22 Tuplanolla joins (~Tuplanoll@91-159-69-97.elisa-laajakaista.fi)
16:10:32 <shapr> heippa hei Tuplanolla
16:13:23 tzh joins (~tzh@c-24-21-73-154.hsd1.or.comcast.net)
16:13:27 × benin0 quits (~benin@183.82.30.117) (Quit: The Lounge - https://thelounge.chat)
16:14:03 <Tuplanolla> Hey, shapr. What's going on?
16:14:23 <shapr> Oh, delaying getting started on some code. What about you?
16:14:38 <shapr> I'm also trying to figure out the process for becoming a maintainer for a piece of GHC
16:15:02 <shapr> And looking into lambdabot to see how much the code has changed in the past (checks calendar) uh, 20 years
16:17:04 <shapr> Kinda awesome to see the initial import into git from, uh, darcs? : https://github.com/lambdabot/lambdabot/commits?author=shapr
16:17:04 × king_gs quits (~Thunderbi@187.201.91.195) (Read error: Connection reset by peer)
16:17:38 <shapr> Was lambdabot even *in* source control in the beginning? I don't remember.
16:18:43 × MajorBiscuit quits (~MajorBisc@wlan-145-94-167-213.wlan.tudelft.nl) (Ping timeout: 256 seconds)
16:19:21 king_gs joins (~Thunderbi@187.201.91.195)
16:23:20 × [itchyjunk] quits (~itchyjunk@user/itchyjunk/x-7353470) (Remote host closed the connection)
16:24:03 [itchyjunk] joins (~itchyjunk@user/itchyjunk/x-7353470)
16:25:28 × mc47 quits (~mc47@xmonad/TheMC47) (Remote host closed the connection)
16:26:15 <dminuoso> triteraflops: The linear extension is orthogonal (dual even, in a sense) to uniqueness types.
16:26:41 <dminuoso> The uniqueness types in Clean form a secondary/orthogonal type system if I understand it correctly
16:27:05 <dminuoso> And even with linear types, you couldn't do it.
16:27:13 <Tuplanolla> Oh, cool.
16:27:29 <Tuplanolla> I've been breaking another compiler instead. https://github.com/coq/coq/issues?q=is%3Aissue+author%3ATuplanolla
16:28:04 <shapr> Tuplanolla: wow nice! that's a good run of breakage
16:29:31 <Tuplanolla> If only it would end.
16:29:31 <shapr> Also been working on some fun things with cdsmith, we built a MUD https://github.com/cdsmith/ourmud that's backed by a serializable graph database https://github.com/cdsmith/edgy
16:29:33 <lechner> Hi, is there a difference in the "before" and "after" code examples for optparse-applicative here? https://blog.ocharles.org.uk/posts/2022-06-22-list-of-monoids-pattern.html
16:29:55 × [itchyjunk] quits (~itchyjunk@user/itchyjunk/x-7353470) (Remote host closed the connection)
16:30:41 <shapr> In the process, we fixed up a few libraries to compile (and work?) with GHC 9.2.2; but I couldn't find a process for uploading "this probably works" kind of releases for packages that haven't been updated in a few years.
16:30:52 × adanwan quits (~adanwan@gateway/tor-sasl/adanwan) (Remote host closed the connection)
16:30:54 <dminuoso> lechner: Not that I can see.
16:31:05 <dminuoso> Think some copy/paste bug snuck its way in there
16:31:08 adanwan joins (~adanwan@gateway/tor-sasl/adanwan)
16:31:14 <shapr> @seen ocharles
16:31:14 <lambdabot> I saw ocharles leaving #ghc 8d 18h 19m 34s ago.
16:32:09 [itchyjunk] joins (~itchyjunk@user/itchyjunk/x-7353470)
16:32:47 <Tuplanolla> I still have one side project that's being written in Haskell. Oddly enough, it has to do with the modeling and visualization of a particular theory of psychology.
16:32:50 × [itchyjunk] quits (~itchyjunk@user/itchyjunk/x-7353470) (Remote host closed the connection)
16:33:01 <shapr> well now I want a link
16:33:06 × dextaa quits (~DV@user/dextaa) (Read error: Connection reset by peer)
16:33:18 <shiraeeshi> shapr: what's MUD?
16:33:31 <geekosaur> "multi-user dungeon"
16:33:35 <shapr> multi user dungeon
16:33:36 <shapr> yeah that
16:33:45 <shapr> https://en.wikipedia.org/wiki/MUD
16:34:10 <lechner> dminuoso: thanks! as a beginner, i wasn't sure
16:34:14 × misterfish quits (~misterfis@87.215.131.98) (Ping timeout: 268 seconds)
16:34:16 mastarija joins (~mastarija@2a05:4f46:e02:8c00:3026:72db:30b8:5822)
16:34:25 <shiraeeshi> wow, interesting
16:34:28 <Tuplanolla> It's not public yet, because it doesn't even work, but here is the related organization. https://www.thebowencenter.org/
16:34:52 eggplantade joins (~Eggplanta@2600:1700:bef1:5e10:99c9:a0a4:f69e:b22e)
16:34:54 <Tuplanolla> It's about "Bowen family systems theory".
16:35:02 <shapr> haven't heard of that
16:35:08 <Tuplanolla> Neither had I.
16:35:15 dextaa joins (~DV@user/dextaa)
16:35:51 × coot quits (~coot@213.134.190.95) (Quit: coot)
16:36:22 coot joins (~coot@213.134.190.95)
16:36:30 <shiraeeshi> Tuplanolla: what kind of visualization?
16:36:48 <shiraeeshi> I mean, something like histograms, graphs, something else?
16:36:59 <Tuplanolla> It looks like a dynamical system on a graph, so FGL, GraphViz and Gnuplot are a perfect fit for it.
16:37:30 <Tuplanolla> I just wish there was a little less Henning in them.
16:37:53 <shiraeeshi> it's weird that you use one word graph in english to talk about graphs, and, well, graphs
16:38:19 <shiraeeshi> "graphical graphs" and "graphs as edges-and-nodes"
16:38:27 <Tuplanolla> Well, all three senses are present, so it all works out!
16:39:06 <darkling> I had a lecturer that pronounced them "graaph" (English RP) for the two-axes-and-a-line thing, and "graf" (Northern English, short "a") for the nodes-and-edges thing.
16:39:41 <shapr> I'm trying not to make jokes about a Van de Graph generator
16:39:52 <Tuplanolla> Personally, I find "network" and "curve" to be more appropriate.
16:39:59 <shapr> Is there a good binding to matplotlib or other graph visualization library?
16:40:12 <shapr> I never could get my head wrapped around fgl, does it have benefits over alga?
16:40:12 × kuribas quits (~user@ip-188-118-57-242.reverse.destiny.be) (Remote host closed the connection)
16:40:13 <Tuplanolla> No; just bad, shapr.
16:40:22 <shapr> oh, too bad
16:40:25 <shapr> use diagrams instead?
16:40:30 <shapr> or JuicyPixels?
16:40:46 <juri_> ug. fighting juicyPixels build trouble atm.
16:41:26 <shapr> I'm pushing PRs for StrictCheck and TCache, though I don't expect them to be merged.
16:43:17 jakalx parts (~jakalx@base.jakalx.net) (Error from remote client)
16:43:28 × merijn quits (~merijn@c-001-001-027.client.esciencecenter.eduvpn.nl) (Ping timeout: 272 seconds)
16:44:02 jakalx joins (~jakalx@base.jakalx.net)
16:46:32 × leeb quits (~leeb@KD106154144179.au-net.ne.jp) (Quit: WeeChat 3.4.1)
16:47:11 <shapr> juri_: are you making pretty pictures?
16:47:39 faitz joins (~faitz@169.150.201.38)
16:49:37 <juri_> shapr: I make things. :)
16:49:58 <juri_> i just happen to support output formats that are pixel-based.
16:52:02 × vglfr quits (~vglfr@46.96.172.76) (Ping timeout: 246 seconds)
16:54:34 × mecharyuujin quits (~mecharyuu@2409:4050:2d4b:a853:8048:c716:f88e:d09f) (Quit: Leaving)
16:55:34 kenran joins (~kenran@200116b82b215d00f12ab9e70125a9a1.dip.versatel-1u1.de)
16:57:00 × coot quits (~coot@213.134.190.95) (Quit: coot)
16:57:01 BusConscious joins (~martin@ip5f5bdf00.dynamic.kabel-deutschland.de)
16:57:38 × kenran quits (~kenran@200116b82b215d00f12ab9e70125a9a1.dip.versatel-1u1.de) (Client Quit)
16:57:46 <dminuoso> lechner: I *think* the intent was to display `flag True False (mconcat [f1, f2, f3])` vs `flag True False [f1, f2, f3]`
16:59:08 <dminuoso> Or perhaps `flag True False (f1 <> f2 <> f3)` in the former case
16:59:48 <dminuoso> Judging from the final sentence, I think they meant to use (<>) in the before example
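The equivalence dminuoso is pointing at can be checked with ordinary monoids; plain lists stand in here for optparse-applicative's Mod values (the f1/f2/f3 contents are made up for illustration):

```haskell
-- mconcat over a list is (<>) folded with mempty, so
-- `mconcat [f1, f2, f3]` and `f1 <> f2 <> f3` are interchangeable
f1, f2, f3 :: [String]
f1 = ["long \"verbose\""]
f2 = ["short 'v'"]
f3 = ["help \"be chatty\""]

main :: IO ()
main = print (mconcat [f1, f2, f3] == f1 <> f2 <> f3)  -- True
```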
17:02:32 × mikoto-chan quits (~mikoto-ch@esm-84-240-99-143.netplaza.fi) (Ping timeout: 246 seconds)
17:05:14 econo joins (uid147250@user/econo)
17:05:54 merijn joins (~merijn@c-001-001-027.client.esciencecenter.eduvpn.nl)
17:06:13 [itchyjunk] joins (~itchyjunk@user/itchyjunk/x-7353470)
17:07:18 × notzmv quits (~zmv@user/notzmv) (Ping timeout: 264 seconds)
17:09:24 × jpds quits (~jpds@gateway/tor-sasl/jpds) (Ping timeout: 268 seconds)
17:10:18 jpds joins (~jpds@gateway/tor-sasl/jpds)
17:10:56 × merijn quits (~merijn@c-001-001-027.client.esciencecenter.eduvpn.nl) (Ping timeout: 246 seconds)
17:11:18 × tzh quits (~tzh@c-24-21-73-154.hsd1.or.comcast.net) (Ping timeout: 240 seconds)
17:12:30 × Neuromancer quits (~Neuromanc@user/neuromancer) (Ping timeout: 240 seconds)
17:13:43 × mastarija quits (~mastarija@2a05:4f46:e02:8c00:3026:72db:30b8:5822) (Quit: Leaving)
17:13:59 × jpds quits (~jpds@gateway/tor-sasl/jpds) (Remote host closed the connection)
17:14:27 jpds joins (~jpds@gateway/tor-sasl/jpds)
17:16:43 Henson joins (~kvirc@107-179-133-201.cpe.teksavvy.com)
17:19:32 <Henson> has anybody here had experience with the StackBuilders or Serokell consulting companies for Haskell development? If so, would you be interested in telling me about it?
17:24:01 × eggplantade quits (~Eggplanta@2600:1700:bef1:5e10:99c9:a0a4:f69e:b22e) (Remote host closed the connection)
17:24:22 moet joins (~moet@lib-02-subnet-194.rdns.cenic.net)
17:26:52 eggplantade joins (~Eggplanta@108-201-191-115.lightspeed.sntcca.sbcglobal.net)
17:27:37 × king_gs quits (~Thunderbi@187.201.91.195) (Quit: king_gs)
17:29:35 × pleo quits (~pleo@user/pleo) (Quit: quit)
17:30:17 coot joins (~coot@213.134.190.95)
17:30:39 × hnOsmium0001 quits (uid453710@user/hnOsmium0001) (Quit: Connection closed for inactivity)
17:33:11 × faitz quits (~faitz@169.150.201.38) (Quit: Lost terminal)
17:34:54 tzh joins (~tzh@c-24-21-73-154.hsd1.or.comcast.net)
17:39:00 merijn joins (~merijn@c-001-001-027.client.esciencecenter.eduvpn.nl)
17:39:03 × yauhsien quits (~yauhsien@61-231-23-53.dynamic-ip.hinet.net) (Remote host closed the connection)
17:39:37 yauhsien joins (~yauhsien@61-231-23-53.dynamic-ip.hinet.net)
17:41:04 Pickchea joins (~private@user/pickchea)
17:42:26 <maerwald> why am I mind boggled every time I look at 'readsPrec'?
17:43:05 × chele quits (~chele@user/chele) (Remote host closed the connection)
17:44:32 × yauhsien quits (~yauhsien@61-231-23-53.dynamic-ip.hinet.net) (Ping timeout: 268 seconds)
17:44:38 <moet> maerwald: because it should have used Either/Maybe but instead uses a list.
17:44:56 <geekosaur> because it's a crawling horror…
17:45:02 <maerwald> 10 years of Haskell and I still can't write a Read instance by hand
17:45:05 <maerwald> ...
17:46:05 tzh_ joins (~tzh@c-24-21-73-154.hsd1.or.comcast.net)
17:46:18 × mbuf quits (~Shakthi@122.164.15.160) (Quit: Leaving)
17:46:28 × tzh quits (~tzh@c-24-21-73-154.hsd1.or.comcast.net) (Read error: Connection reset by peer)
17:46:34 hnOsmium0001 joins (uid453710@user/hnOsmium0001)
17:54:38 Sklogw joins (~Sklogw@88.232.49.3)
17:54:46 yauhsien joins (~yauhsien@61-231-23-53.dynamic-ip.hinet.net)
17:59:16 × tzh_ quits (~tzh@c-24-21-73-154.hsd1.or.comcast.net) (Remote host closed the connection)
17:59:32 tzh_ joins (~tzh@c-24-21-73-154.hsd1.wa.comcast.net)
18:00:10 <Henson> Read instances are weird
18:07:04 nate4 joins (~nate@98.45.169.16)
18:07:07 × jpds quits (~jpds@gateway/tor-sasl/jpds) (Remote host closed the connection)
18:07:39 jpds joins (~jpds@gateway/tor-sasl/jpds)
18:09:44 × hiredman quits (~hiredman@frontier1.downey.family) (Ping timeout: 246 seconds)
18:11:13 <monochrom> I can, after spending 10 hours on "the Hom functor" in category theory. Then it's "just" function composition :)
18:11:44 × Sklogw quits (~Sklogw@88.232.49.3) (Quit: Client closed)
18:12:03 <monochrom> Oh wait, misread, you said Read, not Reader.
18:12:11 × nate4 quits (~nate@98.45.169.16) (Ping timeout: 246 seconds)
18:12:32 <monochrom> But I can write a Read instance by hand too. :)
18:13:57 × ChaiTRex quits (~ChaiTRex@user/chaitrex) (Remote host closed the connection)
18:14:01 <monochrom> I think it is because I wondered how you know when and when not to add parentheses, and readsPrec answered my question.
18:14:22 ChaiTRex joins (~ChaiTRex@user/chaitrex)
18:14:52 <monochrom> Err wait, that's Shows and showsPrec.
18:15:16 <monochrom> OK, I learned monads around the same time I learned Read, that's how.
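For reference, a hand-written Read instance via readsPrec is not much code once readParen does the precedence bookkeeping; Wrap below is a hypothetical type chosen for illustration:

```haskell
-- A hand-written Read instance via readsPrec.
-- readParen (d > 10) requires parentheses when the enclosing
-- precedence exceeds that of function application (10).
newtype Wrap = Wrap Int deriving (Eq, Show)

instance Read Wrap where
  readsPrec d = readParen (d > 10) $ \s ->
    [ (Wrap n, rest') | ("Wrap", rest) <- lex s
                      , (n, rest')     <- readsPrec 11 rest ]

main :: IO ()
main = print (read "Wrap 42" == Wrap 42, read "(Wrap 42)" == Wrap 42)
```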
18:16:15 moet_ joins (~moet@lib-02-subnet-194.rdns.cenic.net)
18:16:59 allbery_b joins (~geekosaur@xmonad/geekosaur)
18:16:59 × geekosaur quits (~geekosaur@xmonad/geekosaur) (Killed (NickServ (GHOST command used by allbery_b)))
18:17:02 allbery_b is now known as geekosaur
18:17:44 dextaa4 joins (~DV@user/dextaa)
18:18:08 × Vajb quits (~Vajb@85-76-45-183-nat.elisa-mobile.fi) (Ping timeout: 246 seconds)
18:18:50 × moet quits (~moet@lib-02-subnet-194.rdns.cenic.net) (Ping timeout: 246 seconds)
18:18:50 × takuan quits (~takuan@178-116-218-225.access.telenet.be) (Ping timeout: 246 seconds)
18:18:59 toluene4 joins (~toluene@user/toulene)
18:19:32 × Pickchea quits (~private@user/pickchea) (Ping timeout: 246 seconds)
18:19:32 × dextaa quits (~DV@user/dextaa) (Ping timeout: 246 seconds)
18:19:32 × jao quits (~jao@cpc103048-sgyl39-2-0-cust502.18-2.cable.virginm.net) (Ping timeout: 246 seconds)
18:19:32 × mon_aaraj quits (~MonAaraj@user/mon-aaraj/x-4416475) (Ping timeout: 246 seconds)
18:19:32 × toluene quits (~toluene@user/toulene) (Ping timeout: 246 seconds)
18:19:32 dextaa4 is now known as dextaa
18:19:33 toluene4 is now known as toluene
18:20:50 takuan joins (~takuan@178-116-218-225.access.telenet.be)
18:21:45 jao joins (~jao@cpc103048-sgyl39-2-0-cust502.18-2.cable.virginm.net)
18:21:51 mon_aaraj joins (~MonAaraj@user/mon-aaraj/x-4416475)
18:23:49 × ChaiTRex quits (~ChaiTRex@user/chaitrex) (Quit: ChaiTRex)
18:24:26 ChaiTRex joins (~ChaiTRex@user/chaitrex)
18:26:30 segfaultfizzbuzz joins (~segfaultf@192-184-223-90.static.sonic.net)
18:27:27 <segfaultfizzbuzz> so, i had a weird thought... true pure functional code (with strictly no io, entropy sources etc) can only preserve the entropy of its inputs or decrease the entropy of its inputs
18:27:42 × merijn quits (~merijn@c-001-001-027.client.esciencecenter.eduvpn.nl) (Ping timeout: 264 seconds)
18:28:14 <segfaultfizzbuzz> that is to say if i feed one megabyte worth of bits into pure functional code, i can get say, one byte out, or i can get one megabyte out, but i cannot get ten megabytes out
18:28:51 × shiraeeshi quits (~shiraeesh@46.34.206.119) (Remote host closed the connection)
18:29:06 <monochrom> You know what, true of I/O too, you just have to draw your "system boundary" larger.
18:29:09 shiraeeshi joins (~shiraeesh@46.34.206.119)
18:29:09 <segfaultfizzbuzz> the pure functional code can produce what appears to be ten megabytes from one megabyte, but it will always compress
18:29:23 <segfaultfizzbuzz> monochrom: talking to me there?
18:29:29 <monochrom> Yes.
18:29:42 <geekosaur> only on ideal hardware. on real hardware there will be entropy leaks in both directions
18:30:02 <segfaultfizzbuzz> well yes i suppose you can "freeze" the amount of entropy fed into a pure functional snippet of code after an io event
18:30:09 <monochrom> pure function + readFile cannot generate more info than input and what's already on disk.
18:30:47 toluene1 joins (~toluene@user/toulene)
18:30:50 <segfaultfizzbuzz> geekosaur: there is probably nondeterminism which leaks into real-world pure functional code (even without io etc) when it executes (runtime, operating system, etc)
18:31:26 × toluene quits (~toluene@user/toulene) (Ping timeout: 246 seconds)
18:31:26 toluene1 is now known as toluene
18:31:42 <geekosaur> and pure functional code itself can cause system load, heat output, cache effects, etc.
18:32:07 <monochrom> Oh, that.
18:32:16 <segfaultfizzbuzz> geekosaur: right, that is an entropy input leak
18:32:32 <Tuplanolla> The term for the other concept is "reversible computing".
18:33:00 <segfaultfizzbuzz> well, you can easily have nonreversible pure functions
18:33:13 <Tuplanolla> Those produce waste heat.
18:33:20 <segfaultfizzbuzz> but all true pure functions are monotonically decreasing in their entropy or at best are entropy-preserving
18:33:29 × z0k quits (~z0k@206.84.141.12) (Ping timeout: 248 seconds)
18:33:30 Pickchea joins (~private@user/pickchea)
18:34:10 <monochrom> Well, in this context we focus on input entropy and output entropy. output <= input can be true because output + waste heat >= input. We are in agreement.
18:34:47 <segfaultfizzbuzz> i don't mean physical entropy here i mean information theory entropy
18:34:59 <monochrom> Instead of idealizing hardware, I am idealizing "output".
18:35:01 <segfaultfizzbuzz> and yes i know about the physical reversibility stuff
18:36:01 <segfaultfizzbuzz> anyway i just thought this was an interesting observation
18:36:59 × eod|fserucas quits (~eod|fseru@193.65.114.89.rev.vodafone.pt) (Ping timeout: 256 seconds)
18:39:19 × eggplantade quits (~Eggplanta@108-201-191-115.lightspeed.sntcca.sbcglobal.net) (Remote host closed the connection)
18:42:16 × superbil quits (~superbil@1-34-176-171.hinet-ip.hinet.net) (Ping timeout: 265 seconds)
18:42:43 superbil joins (~superbil@1-34-176-171.hinet-ip.hinet.net)
18:47:19 progress__ joins (~fffuuuu_i@45.112.243.220)
18:52:01 mshiraeeshi joins (~shiraeesh@46.34.206.119)
18:52:06 jgeerds joins (~jgeerds@55d45f48.access.ecotel.net)
18:54:13 × shiraeeshi quits (~shiraeesh@46.34.206.119) (Ping timeout: 268 seconds)
18:56:29 × coot quits (~coot@213.134.190.95) (Quit: coot)
18:56:40 <Henson> by inputs, do you mean inputs to the function generating the outputs, or does inputs also include the content of the function doing the generating?
18:56:59 <segfaultfizzbuzz> my pure function is f and i am feeding it x, so i am calling f x
18:57:00 × zeenk quits (~zeenk@2a02:2f04:a301:3d00:39df:1c4b:8a55:48d3) (Quit: Konversation terminated!)
18:57:01 × tomgus1 quits (~tomgus1@2a02:c7e:4229:d900:dea6:32ff:fe3d:d1a3) (Quit: ZNC 1.8.2+deb2 - https://znc.in)
18:57:21 <segfaultfizzbuzz> entropy of (f x) is strictly equal to or less than that of x if f is a truly pure function
18:57:43 <Henson> a face-generation ML network as a function that takes a seed integer as an input would be able to greatly amplify the information present in the input, but all of the potential for that information generation resides in the face-generating function
18:58:34 <ski> maerwald : `readsPrec' isn't much harder than `showsPrec'
18:58:42 tomgus1 joins (~tomgus1@2a02:c7e:4229:d900:dea6:32ff:fe3d:d1a3)
18:58:53 <ski> moet_ : that breaks distributivity
18:59:00 <maerwald> ski: great... tell me how to debug "no parse"
19:00:02 × shapr quits (~user@2600:4040:2d31:7100:677f:8b5d:34bb:4aea) (Ping timeout: 255 seconds)
19:00:02 <moet_> ski: :thinkthink:
19:00:42 × machinedgod quits (~machinedg@66.244.246.252) (Ping timeout: 264 seconds)
19:01:47 ski . o O ( "1972 - Alain Colmerauer designs the logic language Prolog. His goal is to create a language with the intelligence of a two year old. He proves he has reached his goal by showing a Prolog session that says \"No.\" to every query." -- "A Brief, Incomplete, and Mostly Wrong History of Programming Languages" by James Iry in 2009-05-07 at
19:01:52 ski <https://james-iry.blogspot.com/2009/05/brief-incomplete-and-mostly-wrong.html> )
19:06:10 unit73e joins (~emanuel@2001:818:e8dd:7c00:32b5:c2ff:fe6b:5291)
19:07:21 hiredman joins (~hiredman@frontier1.downey.family)
19:08:59 eod|fserucas joins (~eod|fseru@193.65.114.89.rev.vodafone.pt)
19:09:30 × eod|fserucas quits (~eod|fseru@193.65.114.89.rev.vodafone.pt) (Remote host closed the connection)
19:12:29 eggplantade joins (~Eggplanta@2600:1700:bef1:5e10:99c9:a0a4:f69e:b22e)
19:13:02 eod|fserucas joins (~eod|fseru@193.65.114.89.rev.vodafone.pt)
19:13:34 × Pickchea quits (~private@user/pickchea) (Ping timeout: 272 seconds)
19:14:13 <segfaultfizzbuzz> Henson: face-generation? wha...?
19:15:10 <segfaultfizzbuzz> Henson: it's just the pigeonhole principle, and the only observation here is that the pigeonhole principle applies to pure functions/programming, which is just something i hadn't considered
19:15:47 <segfaultfizzbuzz> what is the practical significance of this observation? i suppose that you can use this kind of observation to think about the "memoizability or partial memoizability" of your code
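The pigeonhole version of the claim is easy to demonstrate over a finite domain (a sketch; the example functions are arbitrary):

```haskell
import Data.List (nub)

-- Pigeonhole sketch: a pure function over a finite domain cannot have
-- more distinct outputs than it has distinct inputs, so its output
-- entropy never exceeds its input entropy.
distinctOutputs :: Eq b => (a -> b) -> [a] -> Int
distinctOutputs f xs = length (nub (map f xs))

main :: IO ()
main = do
  let inputs = [0 .. 255] :: [Int]
  print (distinctOutputs (`mod` 7) inputs)               -- collapses 256 inputs to 7 outputs
  print (distinctOutputs (* 2) inputs <= length inputs)  -- can never exceed the input count
```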
19:16:50 coot joins (~coot@213.134.190.95)
19:17:55 × eod|fserucas quits (~eod|fseru@193.65.114.89.rev.vodafone.pt) (Remote host closed the connection)
19:18:21 × dsrt^ quits (~dsrt@50.237.44.186) (Ping timeout: 256 seconds)
19:19:31 eod|fserucas joins (~eod|fseru@193.65.114.89.rev.vodafone.pt)
19:21:58 × segfaultfizzbuzz quits (~segfaultf@192-184-223-90.static.sonic.net) (Ping timeout: 240 seconds)
19:21:59 dsrt^ joins (~dsrt@50.237.44.186)
19:25:24 segfaultfizzbuzz joins (~segfaultf@192-184-223-90.static.sonic.net)
19:28:44 × eod|fserucas quits (~eod|fseru@193.65.114.89.rev.vodafone.pt) (Remote host closed the connection)
19:29:32 sebastiandb joins (~sebastian@pool-108-31-128-56.washdc.fios.verizon.net)
19:30:00 eod|fserucas joins (~eod|fseru@193.65.114.89.rev.vodafone.pt)
19:30:30 × eod|fserucas quits (~eod|fseru@193.65.114.89.rev.vodafone.pt) (Remote host closed the connection)
19:31:55 notzmv joins (~zmv@user/notzmv)
19:33:33 eod|fserucas joins (~eod|fseru@193.65.114.89.rev.vodafone.pt)
19:33:50 × moet_ quits (~moet@lib-02-subnet-194.rdns.cenic.net) (Ping timeout: 272 seconds)
19:34:47 × eod|fserucas quits (~eod|fseru@193.65.114.89.rev.vodafone.pt) (Remote host closed the connection)
19:36:29 × segfaultfizzbuzz quits (~segfaultf@192-184-223-90.static.sonic.net) (Quit: segfaultfizzbuzz)
19:36:54 eod|fserucas joins (~eod|fseru@193.65.114.89.rev.vodafone.pt)
19:37:37 Topsi joins (~Topsi@dyndsl-095-033-088-224.ewe-ip-backbone.de)
19:40:42 <monochrom> Significance for environmentalists: Functional programming cannot be carbon-neutral. >:)
19:41:09 × yauhsien quits (~yauhsien@61-231-23-53.dynamic-ip.hinet.net) (Remote host closed the connection)
19:41:37 <monochrom> Significance for security entrepreneurs: Functional programming always leaks information in side channels. >:)
19:42:28 × eod|fserucas quits (~eod|fseru@193.65.114.89.rev.vodafone.pt) (Remote host closed the connection)
19:42:45 × Midjak quits (~Midjak@82.66.147.146) (Quit: Leaving)
19:43:41 <Tisoxin> Is there a tool that shows if a binding is used strict or lazy (due to strictness analysis)?
19:43:58 <Tisoxin> I think it'd be really nice to have something like that in HLS
19:44:03 <unit73e> I only know practical comparisons, like this colleague was trying to program Java like Haskell or maybe Scala and I had to tell him to give it up. Putting lipstick on a horse doesn't make it prettier.
19:44:04 eod|fserucas joins (~eod|fseru@193.65.114.89.rev.vodafone.pt)
19:44:27 <unit73e> also hi
19:44:31 <monochrom> The horse is already pretty.
19:44:34 × eod|fserucas quits (~eod|fseru@193.65.114.89.rev.vodafone.pt) (Remote host closed the connection)
19:44:38 <unit73e> it does the job
19:45:31 <unit73e> anyway he was trying to make Java declarative, but verbosity of the language design ends up always looking meh
19:45:38 <monochrom> But you can try this better analogy: putting lipstick around your eyes. 8)
19:45:43 tromp joins (~textual@92-110-219-57.cable.dynamic.v4.ziggo.nl)
19:46:18 eod|fserucas joins (~eod|fseru@193.65.114.89.rev.vodafone.pt)
19:46:42 <monochrom> Tisoxin: If you know how to read the output of -ddump-stg, it has that information.
19:47:11 merijn joins (~merijn@c-001-001-027.client.esciencecenter.eduvpn.nl)
19:48:42 × eod|fserucas quits (~eod|fseru@193.65.114.89.rev.vodafone.pt) (Remote host closed the connection)
19:49:06 <monochrom> I haven't used -ddump-str-signatures, but it is advertised as "Dump strictness signatures".
19:50:25 eod|fserucas joins (~eod|fseru@193.65.114.89.rev.vodafone.pt)
19:50:54 × eod|fserucas quits (~eod|fseru@193.65.114.89.rev.vodafone.pt) (Remote host closed the connection)
19:51:39 _ht joins (~quassel@231-169-21-31.ftth.glasoperator.nl)
19:51:47 eod|fserucas joins (~eod|fseru@193.65.114.89.rev.vodafone.pt)
19:53:54 <monochrom> Looks like s/-ddump-stg/-ddump-simpl/ suffices. https://downloads.haskell.org/ghc/latest/docs/html/users_guide/hints.html#faster-producing-a-program-that-runs-quicker then scroll down to "How do I find out a function’s strictness?" for how to read the notation.
19:54:49 <monochrom> "Besides, Core syntax is fun to look at!" haha
19:54:55 <monochrom> "have fun"
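A tiny module to try those flags on (the flag names are from the GHC users guide monochrom links; the demand-signature notation in the Core dump varies across GHC versions). The laziness/strictness difference is also observable directly at the value level:

```haskell
-- Compile with: ghc -O2 -ddump-simpl -dsuppress-all Strict.hs
-- and look for the demand signatures attached to each binding.
lazyConst :: Int -> Int -> Int
lazyConst x _ = x + 1      -- second argument is never demanded

strictSum :: Int -> Int -> Int
strictSum x y = x + y      -- both arguments are demanded

main :: IO ()
main = print (lazyConst 1 undefined, strictSum 2 3)  -- undefined is never forced
```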
19:55:18 <Franciman> re. carbon neutral, we have a monad for that!
19:55:45 <Franciman> or maybe it should be a comonad?
19:55:48 <Franciman> hmm
19:55:51 pleo joins (~pleo@user/pleo)
19:56:09 <Tisoxin> > "Besides, Core syntax is fun to look at!" haha
19:56:10 <lambdabot> error: Variable not in scope: haha
19:56:17 <Tisoxin> It's actually not that bad with inspection-testing
19:57:47 <mshiraeeshi> unit73e: borrowing some ideas from fp when coding in java could work in some cases
19:58:04 <mshiraeeshi> I think you should judge on case-by-case basis
19:59:15 × progress__ quits (~fffuuuu_i@45.112.243.220) (Quit: Leaving)
20:00:18 × alp quits (~alp@user/alp) (Remote host closed the connection)
20:00:22 <unit73e> mshiraeeshi, agreed. the issue was that the guy was trying to have small methods for everything without thinking much about why. a failed attempt to make code readable. it's hard when java has some design flaws.
20:00:33 alp joins (~alp@user/alp)
20:00:39 <unit73e> it's not a horrible language but definitely has its flaws
20:00:56 <ski> "Java Precisely" by Peter Sestoft (of Moscow ML fame) in 2016 (3rd ed.) at <https://www.itu.dk/people/sestoft/javaprecisely/> covers functional interfaces, streams, parallel streams, parallel arrays
20:00:58 <monochrom> Like this? http://www.vex.net/~trebla/humour/Nightmare.java
20:00:59 <unit73e> now perl, perl is horrible. sorry perl fans
20:03:00 <unit73e> I'd say scala is java done right, though bloated
20:04:02 <Tisoxin> I'd rather call it a better version of java, instead of java done right
20:04:05 <dolio> People have been functional programming in Java for a long time. I remember way back reading some book on the AWT, and it described this really 'novel' data structure that got used for event listeners...
20:04:19 <dolio> The novel data structure was an immutable tree. :þ
20:04:50 <mshiraeeshi> unit73e: take a Flyweight pattern from GoF book, for example
20:04:50 × eod|fserucas quits (~eod|fseru@193.65.114.89.rev.vodafone.pt) (Ping timeout: 240 seconds)
20:04:52 <dolio> What a concept.
20:04:54 <unit73e> Tisoxin, fair enough
20:04:58 <monochrom> Ah, so functional programming has been novel in Java for a long time? :)
20:05:22 <mshiraeeshi> "Use sharing to support large numbers of fine-grained objects efficiently"
20:05:47 <mshiraeeshi> that's how String in Java works and that's the reason String is immutable in Java, btw
20:06:00 <monochrom> I had been saying "I'll finish my thesis in a year" for many years, too.
20:07:03 × Techcable quits (~Techcable@user/Techcable) (Remote host closed the connection)
20:07:11 Techcable joins (~Techcable@user/Techcable)
20:08:59 <unit73e> I finished my thesis... but I'm one of those guys that starts projects and doesn't finish lol
20:09:25 <unit73e> my xp3 extract thingy is working
20:09:39 yauhsien joins (~yauhsien@61-231-38-201.dynamic-ip.hinet.net)
20:10:55 <monochrom> We turn induction into co-induction
20:11:52 eod|fserucas joins (~eod|fseru@193.65.114.89.rev.vodafone.pt)
20:11:52 × eod|fserucas quits (~eod|fseru@193.65.114.89.rev.vodafone.pt) (Remote host closed the connection)
20:14:15 <Henson> has anybody here had experience with the StackBuilders or Serokell consulting companies for Haskell development? If so, would you be interested in telling me about it?
20:14:17 × yauhsien quits (~yauhsien@61-231-38-201.dynamic-ip.hinet.net) (Ping timeout: 248 seconds)
20:14:40 eod|fserucas joins (~eod|fseru@193.65.114.89.rev.vodafone.pt)
20:15:04 × eod|fserucas quits (~eod|fseru@193.65.114.89.rev.vodafone.pt) (Remote host closed the connection)
20:16:53 <mshiraeeshi> although I'm not sure we can say that the idea of "making things immutable to enable sharing of large numbers of objects" originated from fp
20:17:38 eod|fserucas joins (~eod|fseru@193.65.114.89.rev.vodafone.pt)
20:17:42 <mshiraeeshi> perhaps it's more like a convergence, when the same idea gets invented in several paradigms independently
20:18:07 × eod|fserucas quits (~eod|fseru@193.65.114.89.rev.vodafone.pt) (Remote host closed the connection)
20:19:57 × bitdex quits (~bitdex@gateway/tor-sasl/bitdex) (Ping timeout: 268 seconds)
20:20:18 Pickchea joins (~private@user/pickchea)
20:20:37 eod|fserucas joins (~eod|fseru@193.65.114.89.rev.vodafone.pt)
20:21:01 × eod|fserucas quits (~eod|fseru@193.65.114.89.rev.vodafone.pt) (Remote host closed the connection)
20:21:15 × merijn quits (~merijn@c-001-001-027.client.esciencecenter.eduvpn.nl) (Ping timeout: 256 seconds)
20:22:17 × jrm quits (~jrm@user/jrm) (Ping timeout: 248 seconds)
20:23:07 eod|fserucas joins (~eod|fseru@193.65.114.89.rev.vodafone.pt)
20:23:41 × heath quits (~heath@user/heath) (Quit: WeeChat 1.7)
20:24:08 heath joins (~heath@user/heath)
20:24:18 <EvanR> pure functional programming predates java at least
20:24:36 jrm joins (~jrm@user/jrm)
20:24:48 <mshiraeeshi> it's like asking "if two melodies are similar, does it mean that one plagiarized the other?"
20:24:52 × tromp quits (~textual@92-110-219-57.cable.dynamic.v4.ziggo.nl) (Quit: My iMac has gone to sleep. ZZZzzz…)
20:24:54 × jargon quits (~jargon@184.101.186.108) (Remote host closed the connection)
20:25:27 <EvanR> what are we railing against? If stealing ideas from other languages is technically stealing? xD
20:26:17 <mshiraeeshi> whether applying fp techniques in Java is a good or bad idea
20:26:17 <monochrom> In the case of music, plagiarism can happen and can be bad.
20:26:30 <EvanR> immutable data can be good anywhere
20:26:40 <monochrom> This does not apply to cross pollination in programming.
20:27:02 acidjnk joins (~acidjnk@dynamic-046-114-168-206.46.114.pool.telefonica.de)
20:27:06 × sebastiandb quits (~sebastian@pool-108-31-128-56.washdc.fios.verizon.net) (Ping timeout: 264 seconds)
20:27:32 <monochrom> But to answer the question of what we are railing against:
20:27:35 Vajb joins (~Vajb@hag-jnsbng11-58c3a8-176.dhcp.inet.fi)
20:27:54 <EvanR> on fp in java, if monochrom hasn't posted their lazy list implemented in java via exceptions yet, maybe they should. As an example of "bad" xD
20:27:55 <monochrom> Everyone is railing against someone else having an ever so slightly different opinion, obviously.
20:28:28 <monochrom> But I think I posted it already!
20:28:31 <dolio> Yeah, using exceptions for that definitely sounds bad.
20:28:49 <EvanR> figured
20:29:03 <monochrom> Oh, post and state that it's bad.
20:29:15 <EvanR> no it stands for itself
20:29:44 <monochrom> Is having "humour" in the URL close enough? :)
20:29:57 <monochrom> and using the filename "Nightmare"
20:30:05 × eod|fserucas quits (~eod|fseru@193.65.114.89.rev.vodafone.pt) (Remote host closed the connection)
20:30:41 <EvanR> does java have an "acme" section
20:31:22 <monochrom> I think the number or percentage of people who use my Nightmare.java technique is about the same as that of using that "type-level interview solution" technique.
20:31:49 <monochrom> Java doesn't have a community central repo, does it?
20:31:55 bitdex joins (~bitdex@gateway/tor-sasl/bitdex)
20:32:01 <dolio> Maven?
20:32:18 <monochrom> Ah OK my bad.
20:32:22 <Henson> monochrom: where can I find your Nightmare.java technique?
20:32:35 odnes joins (~odnes@5-203-187-167.pat.nym.cosmote.net)
20:32:39 <monochrom> http://www.vex.net/~trebla/humour/Nightmare.java
20:32:40 eod|fserucas joins (~eod|fseru@193.65.114.89.rev.vodafone.pt)
20:34:23 × odnes quits (~odnes@5-203-187-167.pat.nym.cosmote.net) (Remote host closed the connection)
20:34:33 × ft quits (~ft@shell.chaostreff-dortmund.de) (Ping timeout: 248 seconds)
20:35:49 ft joins (~ft@shell.chaostreff-dortmund.de)
20:38:07 × lyle quits (~lyle@104.246.145.85) (Quit: WeeChat 3.5)
20:39:31 × _ht quits (~quassel@231-169-21-31.ftth.glasoperator.nl) (Remote host closed the connection)
20:39:52 <Tuplanolla> Wait, why exceptions?
20:40:16 odnes joins (~odnes@5-203-187-167.pat.nym.cosmote.net)
20:40:36 <geekosaur> just to prove it could be done?
20:40:46 <EvanR> Because It's There
20:41:09 slack1256 joins (~slack1256@191.125.99.212)
20:42:07 <monochrom> Because it is closest to pattern matching. :)
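The actual Nightmare.java is linked above; as a rough Haskell-flavoured illustration of the same "exceptions as pattern matching" idea (a sketch, not the Java technique itself), each constructor becomes its own exception type and chained `catch` handlers stand in for case alternatives:

```haskell
import Control.Exception

-- Each "constructor" of a sum type is a separate exception type.
data NilE  = NilE      deriving Show
data ConsE = ConsE Int deriving Show
instance Exception NilE
instance Exception ConsE

-- "case produce of ConsE x -> x; NilE -> d", spelled with throw/catch.
-- A handler that doesn't match the thrown type lets it propagate outward,
-- just like falling through to the next case alternative.
headOr :: Int -> IO a -> IO Int
headOr d produce =
  (produce >> pure d)
    `catch` (\(ConsE x) -> pure x)
    `catch` (\NilE      -> pure d)

main :: IO ()
main = do
  headOr 0 (throwIO (ConsE 42)) >>= print
  headOr 0 (throwIO NilE)       >>= print
```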
20:42:11 × eggplantade quits (~Eggplanta@2600:1700:bef1:5e10:99c9:a0a4:f69e:b22e) (Remote host closed the connection)
20:43:02 <slack1256> Is there an option so that `-xc` dumps json instead of the usual format? Google cloud run doesn't deal with raw text well.
20:43:32 tromp joins (~textual@92-110-219-57.cable.dynamic.v4.ziggo.nl)
20:43:56 <monochrom> And some SML users use their exception system for OOP because it's the only subtyping system. :)
20:44:36 <Tuplanolla> Pleasant disgust.
20:44:48 <EvanR> one of the next 700 languages needs to give up and just make exceptions the only thing
20:45:09 <EvanR> "everything's an exception"
20:45:16 odnes_ joins (~odnes@5-203-187-167.pat.nym.cosmote.net)
20:45:19 <EvanR> (ironically)
20:45:23 × odnes quits (~odnes@5-203-187-167.pat.nym.cosmote.net) (Read error: Connection reset by peer)
20:45:26 nate4 joins (~nate@98.45.169.16)
20:45:39 × eod|fserucas quits (~eod|fseru@193.65.114.89.rev.vodafone.pt) (Remote host closed the connection)
20:45:42 <monochrom> Next April 1st, someone might claim "I found an unpublished secret paper of Guy Steele's, it's called Exception: The Ultimate Lambda" >:)
20:45:45 <geekosaur> pretty soon everything will have to produce json output and compile to or simply be javascript, the only language
20:46:19 <slack1256> Well, maps are useful :>
20:46:27 × mon_aaraj quits (~MonAaraj@user/mon-aaraj/x-4416475) (Ping timeout: 268 seconds)
20:46:45 eod|fserucas joins (~eod|fseru@193.65.114.89.rev.vodafone.pt)
20:46:48 <EvanR> google can't convert literally anything to json yet?
20:47:16 × eod|fserucas quits (~eod|fseru@193.65.114.89.rev.vodafone.pt) (Remote host closed the connection)
20:47:17 <slack1256> Not -xc output at least. It records a new json per line. And my stack traces are looooong.
20:47:47 <mshiraeeshi> lol, and "main" jumpstarts everything with a single "throw" statement
20:48:14 mon_aaraj joins (~MonAaraj@user/mon-aaraj/x-4416475)
20:48:25 <monochrom> Isn't it a beauty.
20:48:33 eod|fserucas joins (~eod|fseru@193.65.114.89.rev.vodafone.pt)
20:48:53 <mshiraeeshi> it's interesting though how SML users would model exceptions in OOP
20:49:01 <mshiraeeshi> exceptions on top of exceptions
20:49:02 × eod|fserucas quits (~eod|fseru@193.65.114.89.rev.vodafone.pt) (Remote host closed the connection)
20:49:38 <mshiraeeshi> or just exceptions would suffice? not sure
20:50:33 <ski> `exn' is an "open sum type". you can even generate new exception constructors at run-time (e.g. every time a function is called)
20:50:58 eggplantade joins (~Eggplanta@2600:1700:bef1:5e10:99c9:a0a4:f69e:b22e)
20:51:37 <monochrom> Yeah they don't use throw like I do. They just enjoy exn being a parent type.
20:52:32 <monochrom> IOW they declare a lot of "exceptions" that are never thrown but just used as normal parameters and normal return values.
20:53:09 <monochrom> Oh haha do you mind if I also call my technique "the game of throws"
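ski's point about `exn` being an open sum has a direct Haskell analogue: `SomeException` is an open sum too, and, as monochrom describes for SML, its values can be passed around as ordinary data without ever being thrown. A small sketch; the `Red`/`Blue` types are made up for illustration:

```haskell
import Control.Exception

-- New "constructors" of the open sum can be declared in any module,
-- at any time, just like raising a new `exception` declaration in SML.
data Red  = Red      deriving Show
data Blue = Blue Int deriving Show
instance Exception Red
instance Exception Blue

-- A heterogeneous, extensible collection: nothing here is ever thrown.
vals :: [SomeException]
vals = [toException Red, toException (Blue 7), toException (userError "hi")]

-- fromException is the "downcast": match one branch of the open sum.
asBlue :: SomeException -> Maybe Blue
asBlue = fromException

main :: IO ()
main = print (length [b | Just b <- map asBlue vals])
```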
20:53:18 × takuan quits (~takuan@178-116-218-225.access.telenet.be) (Remote host closed the connection)
20:54:19 eod|fserucas joins (~eod|fseru@193.65.114.89.rev.vodafone.pt)
20:55:12 × eod|fserucas quits (~eod|fseru@193.65.114.89.rev.vodafone.pt) (Remote host closed the connection)
20:55:49 × mon_aaraj quits (~MonAaraj@user/mon-aaraj/x-4416475) (Ping timeout: 256 seconds)
20:55:54 × dsrt^ quits (~dsrt@50.237.44.186) (Ping timeout: 264 seconds)
20:56:29 eod|fserucas joins (~eod|fseru@193.65.114.89.rev.vodafone.pt)
20:56:58 × eod|fserucas quits (~eod|fseru@193.65.114.89.rev.vodafone.pt) (Remote host closed the connection)
20:57:33 mon_aaraj joins (~MonAaraj@user/mon-aaraj/x-4416475)
20:57:36 eod|fserucas joins (~eod|fseru@193.65.114.89.rev.vodafone.pt)
20:59:30 × jgeerds quits (~jgeerds@55d45f48.access.ecotel.net) (Ping timeout: 240 seconds)
21:01:25 × eod|fserucas quits (~eod|fseru@193.65.114.89.rev.vodafone.pt) (Remote host closed the connection)
21:02:48 eod|fserucas joins (~eod|fseru@193.65.114.89.rev.vodafone.pt)
21:05:47 <alexfmpe[m]> is there a way to write the equivalent of... (full message at https://libera.ems.host/_matrix/media/r0/download/libera.chat/675e0345137e20b45e428da1305e5860386d98c8)
21:06:10 <alexfmpe[m]> since `D` can't figure out the `y` even though there's a functional dependency
21:06:42 haskellapprenti joins (~haskellap@204.14.236.211)
21:07:09 <haskellapprenti> pl \a b -> if f a b == GT then a else b
21:07:19 <haskellapprenti> @pl \a b -> if f a b == GT then a else b
21:07:19 <lambdabot> join . (flip =<< (if' .) . flip flip GT . ((==) .) . f)
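For comparison with lambdabot's point-free answer, the lambda haskellapprenti submitted is, written plainly, a "pick the greater by a comparator" function (the name `maxBy` is ours for illustration, not a standard library function):

```haskell
-- The same function @pl was fed, in pointful form:
-- return whichever argument the comparator says is greater.
maxBy :: (a -> a -> Ordering) -> a -> a -> a
maxBy f a b = if f a b == GT then a else b

main :: IO ()
main = do
  print (maxBy compare 3 (5 :: Int))        -- ordinary comparison
  print (maxBy (flip compare) 3 (5 :: Int)) -- reversed comparator
```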
21:07:54 × unit73e quits (~emanuel@2001:818:e8dd:7c00:32b5:c2ff:fe6b:5291) (Ping timeout: 264 seconds)
21:08:14 <slack1256> alexfmpe[m]: Maybe an explicit forall?
21:08:40 × bitdex quits (~bitdex@gateway/tor-sasl/bitdex) (Ping timeout: 268 seconds)
21:08:54 <alexfmpe[m]> er, where
21:10:01 <slack1256> class definition of D.
21:10:28 <slack1256> alexfmpe[m]: https://gitlab.haskell.org/ghc/ghc/-/wikis/quantified-constraints
21:10:43 <slack1256> Seems what you want. But you will use the most basic example.
21:10:53 bitdex joins (~bitdex@gateway/tor-sasl/bitdex)
21:11:11 <alexfmpe[m]> I mean sure, `class (forall y. C x y) => D x` compiles
21:11:15 <alexfmpe[m]> but that means a different thing
21:11:38 <alexfmpe[m]> that forces `C x` to work for all `y`
21:12:00 <alexfmpe[m]> meanwhile there's a fundep saying `x -> y`
21:12:09 <alexfmpe[m]> not sure why this even compiles actually
21:12:19 <slack1256> Who cares about y on that instance actually?
21:12:34 <alexfmpe[m]> I guess an unsatisfiable constraint is still allowed, just unusable
21:12:53 <slack1256> I mean what method
21:13:16 × alp quits (~alp@user/alp) (Ping timeout: 272 seconds)
21:13:30 × pleo quits (~pleo@user/pleo) (Quit: quit)
21:13:51 <alexfmpe[m]> well these classes are method-less, I ran into this on a more complex scenario
21:13:55 pleo joins (~pleo@user/pleo)
21:13:56 <alexfmpe[m]> basically `D` adds an identity
21:14:35 <alexfmpe[m]> or rather, `D` reproduces the issue I had on the class where I wanted to add an identity
21:16:18 <slack1256> Any particular reason why you don't want to use Type Families?
21:16:34 <alexfmpe[m]> indeed, trying the quantifiable thing leads to a deadend
21:16:34 <alexfmpe[m]> `instance D Bool` -> No instance for (C Bool y)
21:16:34 <alexfmpe[m]> `instance C y` -> The coverage condition fails in class ‘C’ for functional dependency: ‘x -> y’
21:16:38 × eod|fserucas quits (~eod|fseru@193.65.114.89.rev.vodafone.pt) (Remote host closed the connection)
21:16:46 <slack1256> AFAIK functional dependencies were more of a hack to make type inference work in presence of MTL like effects.
21:16:49 <alexfmpe[m]> nah, TF are fine, I was just confirming whether they're necessary
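The dead end alexfmpe[m] describes, and the type-family alternative slack1256 suggests, can be sketched roughly like this (the class names are hypothetical reconstructions of the snippet behind the Matrix link):

```haskell
{-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies, TypeFamilies #-}

-- With a fundep, `y` is determined by `x` but has no name that a
-- superclass constraint on D could mention:
class C x y | x -> y
-- class C x y => D x     -- rejected: `y` is not in scope here
-- class (forall y. C x y) => D x  -- compiles, but demands C x y for ALL y

-- With an associated type family, the determined type *is* nameable,
-- so the superclass can refer to it directly:
class C' x where
  type Y x
class C' x => D x

instance C' Bool where
  type Y Bool = Int
instance D Bool

main :: IO ()
main = print (1 :: Y Bool)   -- Y Bool reduces to Int
```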
21:17:30 × werneta quits (~werneta@70-142-214-115.lightspeed.irvnca.sbcglobal.net) (Quit: Lost terminal)
21:17:46 <ski> (yes, you'd rather want `exists', than `forall')
21:18:32 × azimut quits (~azimut@gateway/tor-sasl/azimut) (Ping timeout: 268 seconds)
21:21:58 werneta joins (~werneta@70-142-214-115.lightspeed.irvnca.sbcglobal.net)
21:22:42 × odnes_ quits (~odnes@5-203-187-167.pat.nym.cosmote.net) (Remote host closed the connection)
21:24:02 × BusConscious quits (~martin@ip5f5bdf00.dynamic.kabel-deutschland.de) (Quit: Lost terminal)
21:24:04 × slack1256 quits (~slack1256@191.125.99.212) (Read error: Connection reset by peer)
21:24:48 slack1256 joins (~slack1256@186.11.82.163)
21:26:25 sibnull[m] joins (~sibnullma@2001:470:69fc:105::1:1291)
21:26:33 × bitdex quits (~bitdex@gateway/tor-sasl/bitdex) (Ping timeout: 268 seconds)
21:27:42 jgeerds joins (~jgeerds@55d45f48.access.ecotel.net)
21:31:17 bitdex joins (~bitdex@gateway/tor-sasl/bitdex)
21:32:28 × eggplantade quits (~Eggplanta@2600:1700:bef1:5e10:99c9:a0a4:f69e:b22e) (Remote host closed the connection)
21:33:38 × tromp quits (~textual@92-110-219-57.cable.dynamic.v4.ziggo.nl) (Quit: My iMac has gone to sleep. ZZZzzz…)
21:34:39 zebrag joins (~chris@user/zebrag)
21:39:10 eggplantade joins (~Eggplanta@2600:1700:bef1:5e10:99c9:a0a4:f69e:b22e)
21:44:14 dsrt^ joins (~dsrt@50.237.44.186)
21:47:06 tromp joins (~textual@92-110-219-57.cable.dynamic.v4.ziggo.nl)
21:48:33 merijn joins (~merijn@c-001-001-027.client.esciencecenter.eduvpn.nl)
21:49:05 × nate4 quits (~nate@98.45.169.16) (Ping timeout: 256 seconds)
21:50:54 × eggplantade quits (~Eggplanta@2600:1700:bef1:5e10:99c9:a0a4:f69e:b22e) (Remote host closed the connection)
21:54:29 slac65789 joins (~slack1256@191.126.99.212)
21:55:00 × coot quits (~coot@213.134.190.95) (Quit: coot)
21:56:45 × slack1256 quits (~slack1256@186.11.82.163) (Ping timeout: 268 seconds)
21:57:07 machinedgod joins (~machinedg@66.244.246.252)
21:58:24 eggplantade joins (~Eggplanta@2600:1700:bef1:5e10:99c9:a0a4:f69e:b22e)
21:58:50 slac65789 is now known as slack1256
22:01:10 × Pickchea quits (~private@user/pickchea) (Quit: Leaving)
22:08:12 × Sciencentistguy quits (~sciencent@hacksoc/ordinary-member) (Quit: o/)
22:10:13 Sciencentistguy joins (~sciencent@hacksoc/ordinary-member)
22:13:51 × haskellapprenti quits (~haskellap@204.14.236.211) (Quit: Client closed)
22:16:41 shapr joins (~user@2600:4040:2d31:7100:8a04:a12a:c6c2:2309)
22:16:53 × slack1256 quits (~slack1256@191.126.99.212) (Remote host closed the connection)
22:19:45 × tromp quits (~textual@92-110-219-57.cable.dynamic.v4.ziggo.nl) (Quit: My iMac has gone to sleep. ZZZzzz…)
22:20:15 × michalz quits (~michalz@185.246.204.97) (Remote host closed the connection)
22:22:54 × merijn quits (~merijn@c-001-001-027.client.esciencecenter.eduvpn.nl) (Ping timeout: 264 seconds)
22:30:18 × sander quits (~sander@user/sander) (Ping timeout: 276 seconds)
22:30:18 yauhsien joins (~yauhsien@61-231-38-201.dynamic-ip.hinet.net)
22:33:59 × gmg quits (~user@user/gehmehgeh) (Quit: Leaving)
22:34:33 × yauhsien quits (~yauhsien@61-231-38-201.dynamic-ip.hinet.net) (Ping timeout: 248 seconds)
22:34:36 sander joins (~sander@user/sander)
22:36:49 × pleo quits (~pleo@user/pleo) (Quit: quit)
22:36:52 × acidjnk quits (~acidjnk@dynamic-046-114-168-206.46.114.pool.telefonica.de) (Ping timeout: 272 seconds)
22:37:18 × mshiraeeshi quits (~shiraeesh@46.34.206.119) (Ping timeout: 240 seconds)
22:38:45 × jgeerds quits (~jgeerds@55d45f48.access.ecotel.net) (Ping timeout: 276 seconds)
22:39:51 unit73e joins (~emanuel@2001:818:e8dd:7c00:32b5:c2ff:fe6b:5291)
22:42:02 Guest4431 joins (~Guest44@2a01cb0589202e009cd3bf02702e3314.ipv6.abo.wanadoo.fr)
22:43:32 <Guest4431> Hello here. I'm considering buying the Real World Haskell book, but it was edited in 2007. I'm wondering, is the free web version more up to date than the paper version?
22:45:24 <geekosaur> they're the same, but the online one has an associated wiki with updates
22:45:50 <hpc> http://book.realworldhaskell.org/read/installing-ghc-and-haskell-libraries.html mentions "late 2008" and has XP screenshots
22:45:54 <hpc> so probably not?
22:47:20 × bgamari quits (~bgamari@2001:470:e438::1) (Quit: ZNC 1.8.2 - https://znc.in)
22:51:33 bgamari joins (~bgamari@64.223.226.161)
22:54:58 jmdaemon joins (~jmdaemon@user/jmdaemon)
22:56:24 × even4void quits (even4void@came.here.for-some.fun) (Quit: fBNC - https://bnc4free.com)
22:57:22 × andreas303 quits (andreas303@ip227.orange.bnc4free.com) (Quit: fBNC - https://bnc4free.com)
22:58:05 <unit73e> huh, so I was going to implement my animation SDL thingy but turns out someone already made it
22:58:33 <unit73e> just goes to show I should search hackage more
22:58:55 × catern quits (~sbaugh@2604:2000:8fc0:b:a9c7:866a:bf36:3407) (Ping timeout: 258 seconds)
22:59:21 × xacktm quits (xacktm@user/xacktm) (Quit: fBNC - https://bnc4free.com)
22:59:38 × chomwitt quits (~chomwitt@2a02:587:dc0d:e600:4907:a32:4c72:2e8c) (Ping timeout: 240 seconds)
23:01:08 jmcarthur joins (~jmcarthur@c-73-29-224-10.hsd1.nj.comcast.net)
23:01:34 <unit73e> I do have a stupid question. let's say a package is unmaintained. now what? can someone else pick it or it's dead?
23:01:44 alp joins (~alp@user/alp)
23:03:34 <hpc> the hackage admins can transfer ownership, there's a policy documented somewhere on how, iirc it's contact the maintainer and get 3 months of silence?
23:05:05 <unit73e> ok thanks. because as I see it in haskell the naming scheme is very simple. it's just the name of the package.
23:06:51 × Feuermagier quits (~Feuermagi@user/feuermagier) (Remote host closed the connection)
23:08:56 × jmcarthur quits (~jmcarthur@c-73-29-224-10.hsd1.nj.comcast.net) (Quit: My MacBook Air has gone to sleep. ZZZzzz…)
23:10:32 even4void joins (even4void@came.here.for-some.fun)
23:13:07 × stackdroid18 quits (14094@user/stackdroid) (Quit: hasta la vista... tchau!)
23:13:58 × bontaq quits (~user@ool-45779fe5.dyn.optonline.net) (Ping timeout: 240 seconds)
23:14:45 andreas303 joins (andreas303@ip227.orange.bnc4free.com)
23:17:33 moet joins (~moet@mobile-166-171-249-250.mycingular.net)
23:19:53 × sander quits (~sander@user/sander) (Ping timeout: 248 seconds)
23:20:00 × mon_aaraj quits (~MonAaraj@user/mon-aaraj/x-4416475) (Ping timeout: 268 seconds)
23:21:26 mon_aaraj joins (~MonAaraj@user/mon-aaraj/x-4416475)
23:23:52 sander joins (~sander@user/sander)
23:24:05 Lord_of_Life_ joins (~Lord@user/lord-of-life/x-2819915)
23:24:06 × Lord_of_Life quits (~Lord@user/lord-of-life/x-2819915) (Ping timeout: 264 seconds)
23:25:18 × dsrt^ quits (~dsrt@50.237.44.186) (Ping timeout: 264 seconds)
23:25:20 Lord_of_Life_ is now known as Lord_of_Life
23:25:31 xacktm joins (xacktm@user/xacktm)
23:25:39 <jackdk> hpc, unit73e: https://wiki.haskell.org/Taking_over_a_package is the policy
23:25:58 <jackdk> Although the wiki seems to be struggling right now?
23:26:24 × Topsi quits (~Topsi@dyndsl-095-033-088-224.ewe-ip-backbone.de) (Read error: Connection reset by peer)
23:26:37 <unit73e> jackdk, I was able to read it. looks straightforward.
23:26:49 <unit73e> thanks
23:26:55 <jackdk> (Yeah, wiki's fine. My browser was chucking a tanty)
23:31:43 × xff0x quits (~xff0x@b133147.ppp.asahi-net.or.jp) (Ping timeout: 268 seconds)
23:34:52 <hpc> is chucking a tanty when it goes right wazzock, or just a wee bit barmy? :P
23:35:55 gurkenglas joins (~gurkengla@dslb-002-207-014-022.002.207.pools.vodafone-ip.de)
23:40:37 <raehik> does anyone know if we have a Google Cloud lib for Haskell, like AWS/amazonka? I looked around but couldn't find one
23:40:58 <raehik> that had much reach anyway
23:42:56 xff0x joins (~xff0x@2405:6580:b080:900:6f91:670e:f20c:cfe)
23:43:26 × ubert quits (~Thunderbi@p200300ecdf0da521adf2b2fea6746db1.dip0.t-ipconnect.de) (Ping timeout: 268 seconds)
23:43:27 <hpc> as it so happens, there's one literally called amazonka
23:44:01 <hpc> it's got a zillion subpackages, so at the very least it's matching the sprawling nature of aws itself :D
23:44:14 <geekosaur> I think that's what they were referring to
23:44:30 <geekosaur> but I don't know of a google cloud equivalent
23:45:06 × Guest4431 quits (~Guest44@2a01cb0589202e009cd3bf02702e3314.ipv6.abo.wanadoo.fr) (Quit: Client closed)
23:45:38 × unit73e quits (~emanuel@2001:818:e8dd:7c00:32b5:c2ff:fe6b:5291) (Ping timeout: 240 seconds)
23:45:54 nate4 joins (~nate@98.45.169.16)
23:45:57 merijn joins (~merijn@c-001-001-027.client.esciencecenter.eduvpn.nl)
23:46:01 <hpc> oh wow, i can't read
23:48:48 mvk joins (~mvk@2607:fea8:5ce3:8500::4588)
23:50:30 × nate4 quits (~nate@98.45.169.16) (Ping timeout: 264 seconds)
23:50:38 <raehik> gotcha then no one will mind if I push a tiny package handling only what I've needed

All times are in UTC on 2022-06-23.