haskell - Unexpectedly low throughput for Network I/O using Scotty


I tried benchmarking Scotty to test its network I/O efficiency and overall throughput.

For that, I set up two local servers written in Haskell. One of them doesn't do anything and just acts as a dummy API.

The code for the same is:

    {-# LANGUAGE OverloadedStrings #-}

    import Web.Scotty
    import Network.Wai.Middleware.RequestLogger
    import Control.Monad
    import Data.Text
    import Control.Monad.Trans
    import Data.ByteString
    import Network.HTTP.Types (status302)
    import Data.Time.Clock
    import Data.Text.Lazy.Encoding (decodeUtf8)
    import Control.Concurrent
    import Network.HTTP.Conduit
    import Network.Connection (TLSSettings (..))
    import Network.HTTP.Client
    import Network

    -- Dummy upstream server: replies with a constant body on port 4001.
    main = do
      scotty 4001 $ do
        middleware logStdoutDev
        get "/dummy_api" $ do
          text $ "dummy response"

Then I wrote another server that calls the first server and returns its response.

    {-# LANGUAGE OverloadedStrings #-}

    import Web.Scotty
    import Network.Wai.Middleware.RequestLogger
    import Control.Monad
    import Control.Monad.Trans
    import qualified Data.Text.Internal.Lazy as LT
    import qualified Data.ByteString as B
    import Network.HTTP.Types (status302)
    import Data.Time.Clock
    import Data.Text.Lazy.Encoding (decodeUtf8)
    import Control.Concurrent
    import qualified Data.ByteString.Lazy as LB
    import Network.HTTP.Conduit
    import Network.Connection (TLSSettings (..))
    import Network.HTTP.Client
    import Network

    main = do
      let man = newManager defaultManagerSettings
      scotty 3000 $ do
        middleware logStdoutDev
        get "/filters" $ do
          response <- liftIO $! (testGet man)
          json $ decodeUtf8 (LB.fromChunks response)

    testGet :: IO Manager -> IO [B.ByteString]
    testGet manager = do
      request <- parseUrl "http://localhost:4001/dummy_api"
      man <- manager
      let req = request { method = "GET", responseTimeout = Nothing, redirectCount = 0 }
      res <- withResponse req man $ brConsume . responseBody
      return $! res

With both these servers running, I performed a wrk benchmark and got extremely high throughput.

    wrk -t30 -c100 -d60s "http://localhost:3000/filters"

    Running 1m test @ http://localhost:3000/filters
      30 threads and 100 connections
      Thread Stats   Avg      Stdev     Max   +/- Stdev
        Latency    30.86ms   78.40ms   1.14s    95.63%
        Req/Sec   174.05     62.29     1.18k    76.20%
      287047 requests in 1.00m, 91.61MB read
      Socket errors: connect 0, read 0, write 0, timeout 118
      Non-2xx or 3xx responses: 284752
    Requests/sec:   4776.57
    Transfer/sec:      1.52MB

While this was higher than other web servers such as Phoenix, I realized it meant nothing, since the majority of the responses were 500 errors occurring due to file descriptor exhaustion.

I checked the limits, which turned out to be pretty low:

    ulimit -n
    256

I increased these limits:

    ulimit -n 10240

I ran wrk again, and this time the throughput had been reduced drastically.

    wrk -t30 -c100 -d60s "http://localhost:3000/filters"

    Running 1m test @ http://localhost:3000/filters
      30 threads and 100 connections
      Thread Stats   Avg      Stdev     Max   +/- Stdev
        Latency   105.69ms  161.72ms   1.24s    96.27%
        Req/Sec    19.88     16.62   120.00     58.12%
      8207 requests in 1.00m, 1.42MB read
      Socket errors: connect 0, read 0, write 0, timeout 1961
      Non-2xx or 3xx responses: 1521
    Requests/sec:    136.60
    Transfer/sec:     24.24KB

Although the number of 500 errors had gone down, they were not eliminated. I benchmarked Gin and Phoenix in the same way, and they did way better than Scotty while not giving any 500 responses at all.

What piece of the puzzle am I missing? I suspect there is some issue that I'm failing to debug.

I understand that http-conduit, and the http-client library it uses under the hood, may have a lot to do with these errors, and that this may have nothing to do with Scotty itself.

@Yuras's analogy was correct. On running the server again, the issues related to non-2xx status codes were gone.

The first line in the main block was the culprit. I changed the line from

    main = do
      let man = newManager defaultManagerSettings

to

    main = do
      man <- newManager defaultManagerSettings

And voila, there weren't any issues any more. The high memory usage of the program also stabilized at 21 MB, down from 1 GB earlier.

I don't know the reason for this though. It would be nice to have an explanation.
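For what it's worth, a hedged sketch of what I believe is going on (reasoned from http-client's types, not verified against its internals): newManager defaultManagerSettings has type IO Manager, i.e. it is an action that builds a Manager, not a Manager itself. The let binding merely gives that action a name, so testGet, whose argument has type IO Manager, re-runs it via man <- manager on every request. Each request therefore constructs a brand-new Manager with its own connection pool, no connection is ever reused, and every request opens fresh sockets, which would account for both the file descriptor exhaustion and the high memory usage. Binding with <- runs the action exactly once in main, and all requests share the one Manager. Below is a minimal sketch of the shared-manager shape; fetchDummy is a hypothetical helper invented for illustration, and parseUrl matches the http-client version used above (newer releases call it parseRequest).

    import qualified Data.ByteString as B
    import Network.HTTP.Client

    -- Takes the Manager itself, not an IO action that builds one,
    -- so the caller controls how many managers ever exist.
    fetchDummy :: Manager -> IO [B.ByteString]
    fetchDummy man = do
      req <- parseUrl "http://localhost:4001/dummy_api"
      withResponse req man $ brConsume . responseBody

    main :: IO ()
    main = do
      -- Runs the IO action once; the single Manager (and its pool
      -- of keep-alive connections) is shared by every call below.
      man <- newManager defaultManagerSettings

      -- With `let man = newManager defaultManagerSettings` instead,
      -- each later `m <- man` would build a fresh Manager, and with
      -- it fresh sockets and file descriptors, on every request.
      chunks <- fetchDummy man
      print (B.length (B.concat chunks))

If that reading is right, it would also explain the memory behavior: every per-request Manager keeps its pool and idle connections alive until the garbage collector finalizes it.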

