
Kako lahko analitika izboljša poslovanje? - prepis iz 2. epizode

Anonymous

Editor's note: This is a transcript of one of our past webcasts. The next episode is coming up soon, click here to register.


Eric Kavanagh: Ladies and gentlemen, hello and welcome back once again to Episode 2 of TechWise. Yes, indeed, it's time to get wise! I've got a bunch of really smart people on the line today to help us in that endeavor. My name, of course, is Eric Kavanagh. I will be your host, your moderator, for this lightning-round session. We have a lot of content here, folks. We have some big names in the business, who have been analysts in our space, and four of the most interesting vendors. So we're going to have a lot of good action on the call today. And of course, you out there in the audience play a significant role in asking questions.


So once again, the show is TechWise and today's topic is "How Can Analytics Improve Business?" Obviously, it's a hot topic where you're trying to understand the different kinds of analytics you can do and how that can improve your business, because that's what it's all about at the end of the day.


So you can see up at the top there, that's yours truly. Dr. Kirk Borne, a good friend from George Mason University. He is a data scientist with a tremendous amount of experience, very deep expertise in this space and in data mining and big data and all that kind of fun stuff. And, of course, we have our very own Dr. Robin Bloor, Chief Analyst here at the Bloor Group. Who trained as an actuary many, many years ago. And he's been really focused on this whole big data space and the analytics space for the last half decade. It's been five years since we founded the Bloor Group. So time flies when you're having fun.


We're also going to hear from Will Gorman, Chief Architect of Pentaho; Steve Wilkes, CCO of WebAction; Frank Sanders, Technical Director at MarkLogic; and Hannah Smalltree, Director at Treasure Data. So like I said, that's a lot of content.


So how can analytics help your business? Well, how can it not help your business, quite frankly? There are all kinds of ways that analytics can be used to improve your organization.


So, streamlining operations. That's one you don't hear about as much as you do things like marketing or raising revenue or even identifying opportunities. But streamlining your operations is a really, really powerful thing that you can do for your organization, because you can identify places where you can either outsource something or add data to a particular process, for example. And that can streamline things by not requiring someone to pick up the phone to call or someone to email. There are so many different ways you can streamline your operations. And all of that really helps drive your costs down, right? That's the key, it drives costs down. But it also allows you to serve your customers better.


And if you think about how impatient people have become, and I see this every day in terms of how people interact online, even with our shows, with the service providers that we use. The patience that people have, the attention span, gets shorter and shorter by the day. And what that means is that you, as an organization, need to respond in faster and faster periods of time to be able to satisfy your customers.


For example, if someone is on your website, or browsing around trying to find something, and they get frustrated and leave, well, you may have just lost a customer. And depending on how much you charge for your product or service, maybe that's a big deal. So the bottom line is that streamlining operations is, I think, one of the hottest spaces for applying analytics. And you do that by looking at the numbers, by crunching the data, by figuring out, for example: "Hey, why are we losing so many people on this page of our website?" or "Why are we getting some of these phone calls right now?"
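To make that kind of number-crunching concrete, here is a minimal sketch in Python with pandas; the session data, column names and funnel pages are all hypothetical, not from the webcast.

```python
# A minimal sketch of the page-level drop-off analysis described above.
# Assumes a clickstream table with one row per page view; the column
# names, funnel pages and data are hypothetical.
import pandas as pd

views = pd.DataFrame({
    "session_id": [1, 1, 1, 2, 2, 3, 3, 3, 4],
    "page":       ["home", "product", "checkout",
                   "home", "product",
                   "home", "product", "checkout",
                   "home"],
})

funnel = ["home", "product", "checkout"]
sessions_per_page = (
    views[views["page"].isin(funnel)]
    .groupby("page")["session_id"].nunique()
    .reindex(funnel)
)

# Share of sessions lost at each step of the funnel.
drop_off = 1 - sessions_per_page / sessions_per_page.shift(1)
print(sessions_per_page)
print(drop_off)
```

Running this over real clickstream data would point straight at the page where visitors give up, which is exactly the "why are we losing so many people on this page" question.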


And the more quickly you can respond to that kind of thing, the better your chances of getting on top of the situation and doing something about it before it's too late. Because there is that window of time when someone gets upset about something, is dissatisfied, or is trying to find something but is frustrated; you have a window of opportunity there to reach out to them, to grab them, to interact with that customer. And if you do so in the right way, with the right data or a nice picture of the customer - understanding who this customer is, what their profitability is, what their preferences are - if you can really get a handle on that, you're going to do a great job of holding on to your customers and getting new ones. And that's what it's all about.


So with that, I'm actually going to hand it over to Kirk Borne, one of our data scientists on the call today. And they're pretty rare these days, folks. We've got at least two of them on the call, so that's a big deal. With that, Kirk, I'll hand it over to you to talk about analytics and how it helps business. Go for it.


Dr. Kirk Borne: Well, thank you very much, Eric. Can you hear me?


Eric: That's fine, go ahead.


Dr. Kirk: Okay, good. I'll just share for about five minutes, and people can wave at me if I run over. So, the opening remarks, Eric, that you made really tie into this topic, which I'm going to speak to briefly in the next few minutes: this use of big data and analytics for data-to-decision support. The comment you made about operational streamlining, to me, falls into this concept of operational analytics, which you can see in just about every application across the world, whether it's a science application, business, cyber security and law enforcement, government, health care. Any number of places where we have a stream of data and we're making some kind of response or decision in reaction to the events and alerts and behaviors we see in that data stream.


And so one of the things I'd like to talk about today is, sort of, how you extract the knowledge and insight from big data in order to get to that point where we can actually make decisions to take action. And we frequently talk about this in an automation context. And today I want to blend the automation with the human analyst in the loop. By this I mean, the business analyst plays an important role here in terms of vetting, qualifying and validating specific actions or machine learning rules that we extract from the data. But if we get to a point where we're pretty much convinced that the business rules we've extracted and the mechanisms for alerting us are valid, then we can pretty much turn that over to an automated process. We actually carry out that operational streamlining that Eric was talking about.


So I have a little play on words here, but I hope, if it works for you, that I've talked about the D2D challenge. And D2D is not just data-to-decisions in all contexts; we're looking at it as shown at the bottom of this slide, hopefully you can see it: making discoveries and increasing revenue dollars from our analytics pipelines.


So in this context, I actually have this role of marketer for myself here now; the first thing you want to do is characterize your data, extract the features, extract the characteristics of your customers or whatever entity it is you're tracking in your space. Maybe it's a patient in a health analytics environment. Maybe it's a Web user if you're looking at a sort of cyber security issue. But characterize and extract the features, and then extract some context about that individual, about that entity. And then you gather those pieces you've just created and put them into some sort of collection to which you can then apply machine learning algorithms.
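As a rough illustration of the three steps Kirk describes - characterize each entity, assemble the pieces into a collection, then apply a machine learning algorithm - here is a minimal Python sketch using scikit-learn; the customer features and churn labels are invented for illustration.

```python
# A compact sketch of the pattern outlined above: characterize each
# entity as a feature vector, pool the vectors into a training
# collection, then hand that collection to a machine learning
# algorithm. All features and labels here are hypothetical.
from sklearn.ensemble import RandomForestClassifier

def characterize(customer):
    # Step 1: extract features/characteristics for one entity.
    return [
        customer["visits_per_month"],
        customer["avg_order_value"],
        len(customer["categories_browsed"]),
    ]

customers = [
    {"visits_per_month": 12, "avg_order_value": 80.0,
     "categories_browsed": {"music", "electronics"}, "churned": 0},
    {"visits_per_month": 1,  "avg_order_value": 15.0,
     "categories_browsed": {"clothing"},             "churned": 1},
    {"visits_per_month": 7,  "avg_order_value": 42.0,
     "categories_browsed": {"music"},                "churned": 0},
]

# Step 2: assemble the extracted pieces into one collection.
X = [characterize(c) for c in customers]
y = [c["churned"] for c in customers]

# Step 3: apply a machine learning algorithm to the collection.
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print(model.predict([characterize(customers[0])]))
```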


The reason I say it that way is that, let's say, you have a surveillance camera at an airport. The video itself is enormous, large volume, and it's also very unstructured. But you can extract facial biometrics from the video surveillance and identify individuals in the surveillance cameras. So, for example, in an airport you can identify specific individuals, you can track them through the airport by cross-identifying the same individual across multiple surveillance cameras. So the extracted biometric features are what you're really mining and tracking, not the detailed video itself. But once you have those extractions, then you can apply machine learning rules and analytics to make decisions as to whether you need to take an action in a particular case, or whether something happened incorrectly, or whether you have an opportunity to make an offer. If you have, for example, a store in the airport and you see a customer coming your way, and you know from other information about that customer that maybe he was really interested in buying things in the duty-free shop or something like that, make that offer.


So what kinds of things do I mean by characterization and potentiation? By characterization I mean, again, extracting the features and characteristics in the data. And this can be either machine generated, where algorithms can actually extract, for example, biometric signatures from video, or sentiment analysis. You can extract customer sentiment through online reviews or social media. Some of these things may be human generated, so that the human, the business analyst, can extract additional features, which I'll show in the next slide.


Some of these can be crowdsourced. And with crowdsourcing, there are a lot of different ways you can think about it. But very simply, for example, your users come to your website and they enter search words, keywords, and they end up on a certain page and actually spend time there on that page. That they at least, you know, actually view it, browse it, or click on things on that page. What that tells you is that the keyword they typed in at the very beginning is a descriptor of that page, because it landed the customer on the page they anticipated. And so you can add that additional piece of information: that customers who use this keyword have actually identified this page in our information architecture as the place where the content matches that keyword.
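A minimal sketch of that crowdsourced tagging idea, assuming hypothetical clickstream events and an arbitrary dwell-time threshold standing in for "the customer found what they expected":

```python
# Sketch of the crowdsourced tagging described above: when a search
# keyword lands a visitor on a page and they actually engage with it
# (dwell long enough), credit that keyword as a descriptor of the
# page. Thresholds and event fields are hypothetical.
from collections import Counter, defaultdict

MIN_DWELL_SECONDS = 30  # assumed engagement threshold

search_landings = [
    {"keyword": "running shoes", "page": "/shoes",   "dwell": 95},
    {"keyword": "running shoes", "page": "/shoes",   "dwell": 4},
    {"keyword": "trail shoes",   "page": "/shoes",   "dwell": 120},
    {"keyword": "rain jacket",   "page": "/jackets", "dwell": 60},
]

page_tags = defaultdict(Counter)
for event in search_landings:
    if event["dwell"] >= MIN_DWELL_SECONDS:  # visitor found what they expected
        page_tags[event["page"]][event["keyword"]] += 1

for page, tags in page_tags.items():
    print(page, tags.most_common())
```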


And so crowdsourcing is another aspect that people sometimes forget; that's tracking your customers' breadcrumbs, so to speak; how they move through their space, whether it's an online property or a real property. And then using that kind of pathway that the customer takes as additional information about the things we're looking at.


So what I want to say is that things that are human generated or machine generated end up having a context, in the form of annotating or tagging specific data granules or entities. Whether those entities are patients in a hospital setting, customers, or whatever. And so there are different types of tagging and annotation. Some of it is about the data itself. That's one of those things: what type of information, what are the characteristics, the shapes, maybe the textures and patterns, anomalous versus non-anomalous behaviors. And then extracting some semantics, that is, how does this relate to the other things I know, or: this customer is an electronics customer. This customer is a clothing customer. Or this customer likes to buy music.


So for customers who like music, you identify some semantics about that: they tend to like entertainment. Maybe we could offer them some other entertainment property. So understanding the semantics, and also some provenance, which basically says: where did this come from, who made this assertion, when, on what date, and under what circumstances?


Once you have all those annotations and characterizations, you add to that the next step, which is the context, sort of the who, what, when, where and why of it. Who is the user? What channel did they come in on? What was the source of the information? What kind of reuse have we seen of this particular piece of information or data product? And what is, sort of, its value in the business process? Then you collect those things and manage them, and actually help create a database, if you want to think of it that way. Make them searchable, reusable by other business analysts or by an automated process that, the next time it sees these sets of features, can take that action automatically. And so we get to that kind of operational analytics efficiency, but the more we collect useful, comprehensive information, we can then curate it for these use cases.


So we get down to business. We do the data analytics. We look for interesting patterns, surprises, novelties, outliers, anomalies. We look for new classes and segments in the population. We look for associations and correlations and linkages among the various entities. And then we use all of that to drive our discovery, decision, and dollar-making process.


So there again, the last data slide I have basically just summarizes this: keep the business analyst in the loop. Again, you're not pulling that human out, and it's all-important to keep the human in there.


So these features, they're all provided by machines or by human analysts or even by crowdsourcing. We apply that combination of things to improve the training sets for our models and end up with more accurate predictive models, fewer false positives and negatives, more efficient behavior, more effective interventions with our customers or whoever.


At the end of the day, we're really just combining machine learning and big data with this power of human cognition, which is where that sort of tagging-annotation piece comes in. And that can come through visualization and visual-analytics-type tools or immersive data environments or crowdsourcing. And, at the end of the day, what this is really doing is generating our discovery, insight, and D2D. And those are my comments, so thank you for listening.


Eric: Hey, that sounds great, and let me go ahead and hand the keys over to Dr. Robin Bloor to give his perspective as well. Yeah, I'd like to hear you comment on that concept of streamlining operations, and you're talking about operational analytics. I think that's a big area that needs to be explored quite thoroughly. And real quick, before Robin, I'll bring you back in, Kirk. It does require you to have some pretty significant collaboration among various players in the company, right? You've got to talk to the operations folks; you've got to get your technical people. Sometimes you get your marketing folks or your Web interface folks. These are typically different groups. Do you have any best practices or suggestions on how to get everybody to put their skin in the game?


Dr. Kirk: Well, I think this comes with the business culture of collaboration. I actually talk about the three C's of, sort of, the analytics culture. One is creativity; another is curiosity, and the third is collaboration. So you want creative, serious people, but you also have to get those people collaborating. And it really starts from the top, sort of building that culture with people who should openly share and work together toward the common goals of the business.


Eric: It all makes sense. And you really do have to get good leadership at the top to make that happen. So let's go ahead and hand it over to Dr. Bloor. Robin, the floor is yours.


Dr. Robin Bloor: Okay. Thanks for that intro, Eric. Okay, the way these shows pan out, because we've got two analysts, is that I get to see the analyst's presentation the other guys don't. I knew what Kirk was going to say, so I'll take a completely different angle so we don't overlap too much.


So, what I'm actually talking about, or intend to talk about here, is the role of the data analyst versus the role of the business analyst. And the way I'd characterize it, tongue-in-cheek to some extent, is as kind of a Jekyll-and-Hyde thing. The difference being that the data scientists, in theory at least, know what they're doing. Whereas business analysts aren't quite so sure about how the math works, what can be trusted and what can't.


So let's get down to the reason we're doing this, the reason that data analysis suddenly became a big deal, aside from the fact that we can actually analyze very large amounts of data and pull in data from outside the organization: it pays. The way I look at this - and I think this is only becoming the case, but I definitely think it is the case - data analysis is really business R&D. What you're actually doing, in one way or another, with data analysis is looking at a business process of some kind, whether it's the interaction with a customer, the way your retail operation works, the way you deploy your stores. It doesn't really matter what the issue is. You're looking at a given business process and trying to improve it.


The outcome of successful research and development is a change process. And you can think of manufacturing, if you like, as the usual example of this. Because in manufacturing, people gather information about everything in order to try and improve the manufacturing process. But I think what's happened, or what's happening, in big data is that all of this is now being applied to businesses of every kind, in any way anyone can think of. So pretty much any business process is up for examination if you can gather data about it.


So that's one thing. If you like, that's the question of data analysis. What can data analysis do for the business? Well, it can change the business completely.


This particular diagram, which I'm not going to describe in any depth, is a diagram we came up with as the culmination of a research project we did in the first six months of this year. It's a way of representing a big data architecture. And a few things are worth pointing out before I go on to the next slide. There are two data flows here. One is a real-time data stream that goes along the top of the diagram. The other is a slower data flow that goes along the bottom of the diagram.


Look at the bottom of the diagram. We've got Hadoop as a data reservoir. We've got various databases. We've got a whole lot of data there, with a whole bunch of activity happening on it, most of which is analytical activity.


The point I'm making here, and the only point I really want to make, is that the technology is hard. It's not simple. It's not easy. It's not something that anybody who's new to the game can actually just put together. It's fairly complex. And if you're going to equip a business to do reliable analytics across all of these processes, then it's not something that's going to happen particularly quickly. It's going to require a lot of technology to be added to the mix.


Okay. On the question of what a data scientist is, I could claim to be a data scientist because I was actually trained in statistics before I was ever trained in computing. And I did actuarial work for a period of time, so I know the way a business organizes statistical analysis in order to run itself. It's not a trivial thing. And there's an awful lot of best practice involved, both on the human side and on the technology side.


So when I pose the question "what is a data scientist," I've put the Frankenstein picture up simply because it's a combination of things that have to be knitted together. There's project management involved. There's deep understanding of statistics. There's domain business expertise, which is more a concern of the business analyst than the data scientist, necessarily. There's experience, or the need to understand data architecture and to be able to build data architectures, and there's software engineering involved. In other words, it's probably a team. It's probably not an individual. And that means it's probably a department that needs to be organized, and its organization needs to be thought about fairly extensively.


Throw into the mix the fact of machine learning. Machine learning isn't new, in the sense that most of the statistical techniques used in machine learning have been known about for decades. There are a few new things; I mean, neural networks are relatively new, I think they're only about 20 years old, so some of it is relatively new. But the problem with machine learning was that we didn't actually have the computer power to do it. And what's happened, apart from anything else, is that the computer power is now in place. And that means an awful lot of what, say, data scientists did before in terms of modeling situations, sampling data and then marshaling it in order to do a deeper analysis of the data - in some cases we can actually just throw computer power at it. Just choose machine learning algorithms, throw them at the data and see what comes out. And that's something a business analyst can do, right? But the business analyst needs to understand what they're doing. I mean, I think that's really the issue, more than anything else.
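Robin's "choose algorithms, throw them at the data and see what comes out" can be made concrete with a few lines of scikit-learn; this is just an illustrative comparison on a stock demo dataset, and his caveat stands - the scores only mean something if the analyst understands the validation behind them.

```python
# Fit several off-the-shelf learners and compare cross-validated
# scores - the "throw computer power at it" approach in miniature.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)  # stock demo dataset

candidates = {
    "logistic regression": make_pipeline(StandardScaler(), LogisticRegression()),
    "random forest": RandomForestClassifier(random_state=0),
    "neural network": make_pipeline(
        StandardScaler(), MLPClassifier(max_iter=1000, random_state=0)),
}

# The analyst still has to understand what 5-fold cross-validation
# is actually measuring before trusting these numbers.
for name, model in candidates.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```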


Well, this is just to say that you can know more about a business from its data than by any other means. Einstein didn't say that, I said that. I just put his picture up for credibility. But the situation that's actually starting to develop is one where the technology, if properly applied, and the math, if properly applied, will be able to run a business as well as any individual. We've watched this with IBM. First of all, it could beat the best guys at chess, and then it could beat the best guys at Jeopardy; eventually we'll be able to beat the best guys at running a company. The statistics will eventually win out. And it's hard to see how that won't happen; it just hasn't happened yet.


So what I'm saying, and this is kind of the complete message of my presentation, is that there are these two issues for the business. The first is, can you get the technology right? Can you make the technology work for the team that's actually going to preside over it and get the benefits for the business? And then second, can you get the people right? And both of those are issues. And they're issues that, to this point in time, have not, shall we say, been resolved.


Okay, Eric, I'll pass it back to you. Or perhaps I should pass it on to Will.


Eric: Actually, yes. Thank you, Will Gorman. Yeah, there you go, Will. So let's see. Let me give you the key to the WebEx. So what have you got going on? Pentaho, obviously, you guys have been around for a while, and open-source BI is where you started. But you've got a lot more than you used to have, so let's see what you've got these days for analytics.


Will Gorman: Absolutely. Hi, everybody! My name is Will Gorman. I'm the Chief Architect at Pentaho. For those of you who haven't heard of us, I'll just mention that Pentaho is a big data integration and analytics company. We've been in the business for ten years. Our products have evolved side by side with the big data community, starting as an open-source platform for data integration and analytics, innovating with technologies like Hadoop and NoSQL even before commercial entities formed around those technologies. We now have more than 1,500 commercial customers and many more production deployments as a result of our innovation around open source.


Our architecture is highly embeddable and extensible, purpose-built to be flexible, as big data technology in particular is evolving at a very rapid pace. Pentaho offers three main product areas that work together to address big data analytics use cases.


The first product in our architecture is Pentaho Data Integration, which is geared toward data technologists and data engineers. This product offers a visual, drag-and-drop experience for defining data pipelines and processes for orchestrating data in big data environments and traditional environments alike. It's a lightweight, metadata-driven data integration platform built on Java, and it can be deployed as a process within MapReduce or YARN or Storm and many other batch and real-time platforms.


Our second product area is around visual analytics. With this technology, organizations and OEMs can offer a rich drag-and-drop visualization and analytics experience for business analysts and business users through modern browsers and tablets, allowing for the ad hoc creation of reports and dashboards, as well as pixel-perfect dashboarding and reporting.


Our third product area focuses on predictive analytics, targeted at data scientists and machine learning algorithms. As mentioned before, things like neural networks and the like can be incorporated into a data transformation environment, allowing data scientists to go from modeling to a production environment, giving access to the predictions, and those can impact business processes very, very quickly.


All of these products are tightly integrated into a single agile experience and give our enterprise customers the flexibility they need to address their business problems. We're seeing a rapidly evolving landscape of big data in traditional technologies. We hear from some companies in the big data space that the EDW is nearing its end. In fact, what we see with our enterprise customers is that they need to introduce big data into existing business and IT processes, not replace those processes.


This simple diagram shows the point in the architecture we often see, which is an EDW-deployment type of architecture with data integration and BI use cases. Now, this diagram is similar to Robin's slide on big data architecture in that it incorporates real-time and historical data. As new data sources and real-time requirements emerge, we see big data as an additional part of the overall IT architecture. These new data sources include machine-generated data, unstructured data, the standard volume and velocity and variety of requirements we hear about in big data; they don't fit traditional EDW processes. Pentaho works closely with Hadoop and NoSQL to simplify the ingestion, processing and visualization of this data, as well as blending it with traditional sources, to give customers a full view into their data environment. We do this in a governed fashion, so that IT can offer a complete analytics solution to its line of business.


In closing, I'd like to highlight our philosophy around big data analytics and integration; we believe these technologies work better together, in a single unified architecture, enabling a number of use cases that otherwise wouldn't be possible. Our customers' data environments are more than just big data, Hadoop and NoSQL. All data is fair game. And big data sources need to be available and working together to impact business value.


Finally, we believe that in order to solve these business problems in enterprises very effectively through data, IT and the lines of business need to work together on a governed, blended approach to big data analytics. Thanks very much for giving us the time to talk, Eric.


Eric: You bet. No, that's good stuff. I want to come back to that side of your architecture when we get to the Q&A. So let's move through the rest of the presentations, and thank you very much for that. You guys have definitely been moving fast the last couple of years, I have to say that for sure.


So Steve, let me go ahead and hand it over to you. Just click there on the down arrow and go for it. So Steve, I'm giving you the keys. Steve Wilkes, just click that far-down arrow on your keyboard.


Steve Wilkes: There we go.


Eric: There you go.


Steve: That's a great intro you gave me, though.


Eric: Yeah.


Steve: So, I'm Steve Wilkes. I'm the CCO at WebAction. We've only been around for the last couple of years, and we've definitely been moving fast as well since then. WebAction is a real-time big data analytics platform. Eric mentioned earlier how important real time is and how real time your applications are becoming. Our platform is designed to build real-time apps, to enable the next generation of data-driven apps that can be built incrementally, and to allow people to build dashboards from the data generated by those apps, but with a focus on real time.


Our platform is actually a full end-to-end platform, doing everything from data acquisition and data processing all the way through to data visualization. It enables multiple different types of people within an enterprise to work together to create true real-time apps, giving them insight into things happening in their enterprise as they happen.


And this is a little different from what most people have been seeing in big data, in that the traditional approach - well, traditional for the last couple of years - with big data has been to capture it from a whole bunch of different sources and then pile it up into a big reservoir or lake or whatever you want to call it. And then process it when you need to run a query against it; to run large-scale historical analysis, or even just ad hoc querying of large amounts of data. Now, that works for certain use cases. But if you want to be proactive in your enterprise, if you want to actually be told what's going on rather than finding out that something went wrong toward the end of the day or the end of the week, then you really need to move to real time.


And that switches things around a little. It moves the processing to the middle. So, effectively, you're taking that stream of large amounts of data that is being generated continuously within the enterprise, and you're processing it as you get it. And because you're processing it as you get it, you don't have to store everything. You can just store the important data, or the things you need to remember that actually happened. So if you're tracking the GPS location of vehicles moving down the road, you don't really care where they are every second, and you don't need to store where they are every second. You just need to care: have they left this place? Have they arrived at this place? Have they driven the freeway or not?
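A minimal sketch of that GPS example in Python: process each fix as it arrives and keep only the meaningful transitions (arrived at or left a place) rather than every position. The places and proximity threshold are hypothetical.

```python
# Event detection over a GPS stream: store only arrivals/departures,
# not every fix. Places, coordinates and the radius are made up.
import math

PLACES = {"depot": (37.4, -122.1), "freeway_onramp": (37.5, -122.0)}
RADIUS_DEG = 0.02  # crude proximity threshold, in degrees

def nearest_place(lat, lon):
    for name, (plat, plon) in PLACES.items():
        if math.hypot(lat - plat, lon - plon) <= RADIUS_DEG:
            return name
    return None

def detect_events(fixes):
    """Yield enter/leave events from a stream of (lat, lon) fixes."""
    current = None
    for lat, lon in fixes:
        place = nearest_place(lat, lon)
        if place != current:
            if current is not None:
                yield ("left", current)
            if place is not None:
                yield ("arrived", place)
            current = place

stream = [(37.4, -122.1), (37.41, -122.11), (37.45, -122.05), (37.5, -122.0)]
for event in detect_events(stream):
    print(event)  # only these events get stored, not every fix
```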


So it's really important to consider that, as more and more data is generated, the three Vs come in. Velocity basically determines how much data is generated every day. The more data that's generated, the more you have to store. And the more you have to store, the longer it takes to process. But if you can process it as you get it, you get a really big benefit and you can react to it. You can be told that things are happening, rather than having to search for them afterwards.


So our platform is designed to be massively scalable. It has three major pieces - the acquisition piece, the processing piece and then the delivery and visualization pieces of the platform. On the acquisition side, we're not just looking at machine-generated log data like Web logs or applications that have all the other logs being generated. We can also go in and do change data capture from databases. So that basically enables us to - we've seen the ETL side that Will presented, and with traditional ETL you have to run queries against the databases. We can be told when things happen in the database. We capture the changes and receive those events. And then there's obviously the social feeds and live device data that's being pumped to you over TCP or ACDP sockets.


There's tons of different ways of getting data. And talking of volume and velocity, we're seeing volumes that are billions of events per day, right? So it's large, large amounts of data that is coming in and needs to be processed.


That is processed by a cluster of our servers. The servers all have the same architecture and are all capable of doing the same things. But you can configure them to, sort of, do different things. And within the servers we have a high-speed query processing layer that enables you to do some real-time analytics on the data, to do enrichments of the data, to do event correlation, to track things happening within time windows, to do predictive analytics based on patterns that are being seen in the data. And that data can then be stored in a variety places - the traditional RDBMS, enterprise data warehouse, Hadoop, big data infrastructure.
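As a rough illustration of that time-window style of processing (not WebAction's actual API), here is a small Python sketch that watches an event stream and alerts when one account logs in too often within a sliding window; the field names and thresholds are invented.

```python
# Sliding-window event correlation in miniature: alert when the same
# account logs in more than MAX_LOGINS times within WINDOW_SECONDS.
# All names and numbers here are hypothetical.
from collections import defaultdict, deque

WINDOW_SECONDS = 60
MAX_LOGINS = 3

recent = defaultdict(deque)  # account -> login timestamps in window

def on_login(account, ts):
    window = recent[account]
    window.append(ts)
    while window and ts - window[0] > WINDOW_SECONDS:
        window.popleft()  # expire events that fell out of the window
    if len(window) > MAX_LOGINS:
        print(f"ALERT: {len(window)} logins for {account} in {WINDOW_SECONDS}s")

for ts, account in [(0, "bob"), (10, "bob"), (20, "bob"), (25, "bob"), (300, "bob")]:
    on_login(account, ts)
```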


And the same live data can also be used to power real-time data-driven apps. Those apps can have a real-time view of what's going on and people can also be alerted when important things happen. So rather than having to go in at the end of the day and find out that something bad really happened earlier on in the day, you could be alerted about it the second we spot it, and go straight to the page, drill down and find out what's going on.


So it changes the paradigm completely from having to analyze data after the fact to being told when interesting things are happening. And our platform can then be used to build data-driven applications. And this is really where we're focusing, is building out these applications. For customers, with customers, with a variety of different partners to show true value in real-time data analysis. So that allows people that, or companies that do site applications, for example, to be able track customer usage over time and ensure that the quality of service is being met, to spot real-time fraud or money laundering, to spot multiple logins or hack attempts and those kind of security events, to manage things like set-top boxes or other devices, ATM machines to monitor them in real time for faults, failures that have happened, could happen, will happen in the future based on predictive analysis. And that goes back to the point of streamlining operations that Eric mentioned earlier, to be able to spot when something's going to happen and organize your business to fix those things rather than having to call someone out to actually do something after the fact, which is a lot more expensive.


Consumer analytics is another piece, being able to know when a customer is doing something while they're still there in your store. Data center management, being able to monitor resource usage in real time and change where things are running, and being able to know when things are going to fail in a much more timely fashion.


So that's our products in a nutshell and I'm sure we'll come back to some of these things in the Q&A session. Thank you.


Eric: Yes, indeed. Nicely done. Okay good. And now next stop in our lightning round, we've got Frank Sanders calling in from MarkLogic. I've known about these guys for a number of years, a very, very interesting database technology. So Frank, I'm turning it over to you. Just click anywhere in that. Use the down arrow on your keyboard and you're off to the races. Go for it.


Frank Sanders: Thank you very much, Eric. So as Eric mentioned, I'm with a company called MarkLogic. And what MarkLogic does is we provide an enterprise NoSQL database. And perhaps, the most important capability that we bring to the table with regards to that is the ability to actually bring all of these disparate sources of information together in order to analyze, search and utilize that information in a system similar to what you're used to with traditional relational systems, right?


And some of the key features that we bring to the table in that regard are all of the enterprise features that you'd expect from a traditional database management system, your security, your HA, your DR, your backup and restore, your ACID transactions. As well as the design that allows you to scale out either on the cloud or on commodity hardware so that you can handle the volume and the velocity of the information that you're going to have to handle in order to build and analyze this sort of information.


And perhaps the most important capability is the fact that we're schema agnostic. What that means, practically, is that you don't have to decide what your data is going to look like when you start building your applications or when you start pulling that information together. But over time, you can incorporate new data sources, pull additional information in and then leverage and query and analyze that information just as you would with anything that was there from the time that you started the design. Okay?


So how do we do that? How do we actually enable you to load different sorts of information, whether it be text, RDF triples, geospatial data, temporal data, structured data and values, or binaries. And the answer is that we've actually built our server from the ground up to incorporate search technology which allows you to put information in and that information self describes and it allows you to query, retrieve and search that information regardless of its source or format.


And what that means practically - and why this is important when you're doing analysis - is that analytics and information are most important when they're properly contextualized and targeted, right? So a very important key part of any sort of analytics is search, and the key part is search analytics. You can't really have one without the other and successfully achieve what you set out to achieve. Right?


And I'm going to talk briefly about three and a half different use cases of customers that we have in production that are using MarkLogic to power this sort of analytics. Okay. So the first such customer is Fairfax County. And Fairfax County has actually built two separate applications. One is based around permitting and property management. And the other, which is probably a bit more interesting, is the Fairfax County police events application. What the police events application actually does is it pulls information together like police reports, citizen reports and complaints, Tweets, other information they have such as sex offenders and whatever other information that they have access to from other agencies and sources. Then they allow them to visualize that and present this to the citizens so they can do searches and look at various crime activity, police activity, all through one unified geospatial index, right? So you can ask questions like, "what is the crime rate within five miles" or "what crimes occurred within five miles of my location?" Okay.


Another user that we've got, another customer that we have is OECD. Why OECD is important to this conversation is because in addition to everything that we've enabled for Fairfax County in terms of pulling together information, right; all the information that you would get from all various countries that are members of the OECD that they report on from an economic perspective. We actually laid a target drill into that, right. So you can see on the left-hand side we're taking the view of Denmark specifically and you can kind of see a flower petal above it that rates it on different axes. Right? And that's all well and good. But what the OECD has done is they've gone a step further.


In addition to these beautiful visualizations and pulling all these information together, they're actually allowing you in real time to create your own better life index, right, which you can see on the right-hand side. So what you have there is you have a set of sliders that actually allow you to do things like rank how important housing is to you or income, jobs, community, education, environment, civic engagement, health, life satisfaction, safety and your work/life balance. And dynamically based on how you are actually inputting that information and weighting those things, MarkLogic's using its real-time indexing capability and query capability to actually then change how each and every one of these countries is ranked to give you an idea of how well your country or your lifestyle maps through a given country. Okay?
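The slider-driven re-ranking Frank describes boils down to a weighted score per country, recomputed as the weights change; here is a toy Python sketch with made-up scores and weights, not the OECD's or MarkLogic's actual implementation.

```python
# Re-rank countries on the fly from user-supplied weights per
# dimension. Scores and weights are invented illustration data.
scores = {
    "Denmark":   {"housing": 6.5, "income": 5.0, "work_life": 9.0},
    "Australia": {"housing": 7.5, "income": 6.0, "work_life": 6.5},
    "Japan":     {"housing": 6.0, "income": 5.5, "work_life": 4.5},
}

def rank(weights):
    total = sum(weights.values()) or 1.0
    return sorted(
        ((sum(s[d] * w for d, w in weights.items()) / total, country)
         for country, s in scores.items()),
        reverse=True,
    )

# Move the sliders: work/life balance matters most to this user.
for score, country in rank({"housing": 1, "income": 2, "work_life": 5}):
    print(f"{country}: {score:.2f}")
```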


And the final example that I'm going to share is MarkMail. And what MarkMail really tries to demonstrate is that we can provide these capabilities and you can do the sort of analysis not only on structured information or information that's coming in that's numerical but actually on more loosely structured, unstructured information, right? Things like emails. And what we've seen here is we're actually pulling information like geolocation, sender, company, stacks and concepts like Hadoop being mentioned within the context of an email and then visualizing it on the map as well as looking at who those individuals and what list across that, a sent and a date. This is where you're looking at things that are traditionally not structured, that may be loosely structured, but are still able to derive some structured analysis from that information without having to go to a great length to actually try and structure it or process it at a time. And that's it.


Eric: Hey, okay good. And we got one more. We've got Hannah Smalltree from Treasure Data, a very interesting company. And this is a lot of great content, folks. Thank you so much for all of you for bringing such good slides and such good detail. So Hannah, I just gave the keys to you, click anywhere and use the down arrow on your keyboard. You got it. Take it away.


Hannah Smalltree: Thank you so much, Eric. This is Hannah Smalltree from Treasure Data. I'm a director with Treasure Data but I have a past as a tech journalist, which means that I appreciate two things. First of all, these events can be long to sit through, with a lot of different descriptions of technology that can all sound like they run together, so I really want to focus on our differentiator. And the real-world applications are really important so I appreciate that all of my peers have been great about providing those.


Treasure Data is a new kind of big data service. We're delivered entirely on the cloud in a software as a service or managed-service model. So to Dr. Bloor's point earlier, this technology can be really hard and it can be very time consuming to get up and running. With Treasure Data, you can get all of these kinds of capabilities that you might get in a Hadoop environment or a complicated on-premise environment in the cloud very quickly, which is really helpful for these new big data initiatives.


Now we talk about our service in a few different phases. We offer some very unique collection capabilities for collecting streaming data, so particularly event data and other kinds of real-time data. We'll talk a little bit more about those data types. That is a big differentiator for our service. As you get into big data, or if you are already in it, then you know that collecting this data is not trivial. When you think about a car with 100 sensors sending data every minute, even those 100 sensors sending data every ten minutes, that adds up really quickly as you start to multiply the amount of products that you have out there with sensors, and it quickly becomes very difficult to manage. So we are talking with customers who have millions, and we have customers who have billions of rows of data a day that they're sending us. And they're doing that as an alternative to trying to manage that themselves in a complicated Amazon infrastructure, or even trying to bring it into their own environment.


We have our own cloud storage environment. We manage it. We monitor it. We have a team of people that's doing all that tuning for you. And so the data flows in, it goes into our managed storage environment.


Then we have embedded query engines so that your analyst can go in and run queries and do some initial data discovery and exploration against the data. We have a couple of different query engines for it actually now. You can use SQL syntax, which your analysts probably know and love, to do some basic data discovery, to do some more complex analytics that are user-defined functions or even to do things as simple as aggregate that data and make it smaller so that you can bring it into your existing data warehouse environment.


You can also connect your existing BI tools, your Tableau, is a big partner of ours; but really most BIs, visualization or analytics tools can connect via our industry standard JDBC and ODBC drivers. So it gives you this complete set of big data capabilities. You're allowed to export your queries results or data sets anytime for free, so you can easily integrate that data. Treat this as a data refinery. I like to think of it more as a refinery than a lake because you can actually do stuff with it. You can go through, find the valuable information and then bring it into your enterprise processes.


The next slide, we talk about the three Vs of big data - some people say four or five. Our customers tend to struggle with the volume and velocity of the data coming at them. And so to get specific about the data types - Clickstream, Web access logs, mobile data is a big area for us, mobile application logs, application logs from custom Web apps or other applications, event logs. And increasingly, we have a lot of customers dealing with sensor data, so from wearable devices, from products, from automotive, and other types of machine data. So when I say big data, that's the type of big data that I'm talking about.


Now, a few use cases in perspective for you - we work with a retailer, a large retailer. They are very well known in Asia. They're expanding here in the US. You'll start to see stores; they're often called Asian IKEA, so, simple design. They have a loyalty app and a website. And in fact, using Treasure Data, they were able to deploy that loyalty app very quickly. Our customers get up and running within days or weeks because of our software and our service architecture and because we have all of the people doing all of that hard work behind the scenes to give you all of those capabilities as a service.


So they use our service for mobile application analytics, looking at the behavior, what people are clicking on in their mobile loyalty application. They look at the website clicks and they combine that with their e-commerce and POS data to design more efficient promotions. They actually wanted to drive people into stores because they found that when people go into stores they spend more money, and I'm like that; you go in to pick something up and you spend more money.


Another use case that we're seeing in digital video games, incredible agility. They want to see exactly what is happening in their game, and make changes to that game even within hours of its release. So for them, that real-time view is incredibly important. We just released a game but we noticed in the first hour that everyone is dropping off at Level 2; how are we going to change that? They might change that within the same day. So real time is very important. They're sending us billions of event logs per day. But that could be any kind of mobile application where you want some kind of real-time view into how somebody's using that.


And finally, a big area for us is our product behavior and sensor analytics. So with sensor data that's in cars, that's in other kinds of machines, utilities, that's another area for us, in wearable devices. We have research and development teams that want to quickly know what the impact of a change to a product is or people interested in the behavior of how people are interacting with the product. And we have a lot more use cases which, of course, we're happy to share with you.


And then finally, just show you how this can fit into your environment, we offer again the capability to collect that data. We have very unique collection technology. So again, if real-time collection is something that you're struggling with or you anticipate struggling with, please come look at the Treasure Data service. We have really made capabilities for collecting streaming data. You can also bulk load your data, store it, analyze it with our embedded query engines and then, as I mentioned, you can export it right to your data warehouse. I think Will mentioned the need to introduce big data into your existing processes. So not go around or create a new silo, but how do you make that data smaller and then move it into your data warehouse and you can connect to your BI, visualization and advanced analytics tools.


But perhaps, the key points I want to leave you with are that we are managed service, that's software as a service; it's very cost effective. A monthly subscription service starting at a few thousand dollars a month and we'll get you up and running in a matter of days or weeks. So compare that with the cost of months and months of building your own infrastructure and hiring those people and finding it and spending all that time on infrastructure. If you're experimenting or if you need something yesterday, you can get up and running really quickly with Treasure Data.


And I'm just pointing you to our website and to our starter service. If you're a hands-on person who likes to play, please check out our starter service. You can get on, no credit card required, just name and email, and you can play with our sample data, load up your own data and really get a sense of what we're talking about. So thanks so much. Also, check our website. We were named the Gartner Cool Vendor in Big Data this year, very proud of that. And you can also get a copy of that report for free on our website as well as many other analyst white papers. So thanks so much.


Eric: Okay, thank you very much. We've got some time for questions here, folks. We'll go a little bit long too because we've got a bunch of folks still on the line here. And I know I've got some questions myself, so let me go ahead and take back control and then I'm going to ask a couple of questions. Robin and Kirk, feel free to dive in as you see fit.


So let me go ahead and jump right to one of these first slides that I checked out from Pentaho. So here, I love this evolving big data architecture, can you kind of talk about how it is that this kind of fits together at a company? Because obviously, you go into some fairly large organization, even a mid-size company, and you're going to have some people who already have some of this stuff; how do you piece this all together? Like what does the application look like that helps you stitch all this stuff together and then what does the interface look like?


Will: Great question. The interfaces are a variety depending on the personas involved. But as an example, we like to tell the story of - one of the panelists mentioned the data refinery use case - we see that a lot in customers.


One of our customer examples that we talk about is Paytronix, where they have that traditional EDW data mart environment. They are also introducing Hadoop, Cloudera in particular, and with various user experiences in that. So first there's an engineering experience, so how do you wire all these things up together? How do you create the glue between the Hadoop environment and EDW?


And then you have the business user experience which we talked about, a number of BI tools out there, right? Pentaho has a more embeddable OEM BI tool but there are great ones out there like Tableau and Excel, for instance, where folks want to explore the data. But usually, we want to make sure that the data is governed, right? One of the questions in the discussions was about the single-version experience and how you manage that, and that's done with technology like Pentaho Data Integration, blending that data together not on the glass but in the IT environment. So it really protects and governs the data and allows for a single experience for the business analyst and business users.


Eric: Okay, good. That's a good answer to a difficult question, quite frankly. And let me just ask the question to each of the presenters and then maybe Robin and Kirk if you guys want to jump in too. So I'd like to go ahead and push this slide for WebAction which I do think is really a very interesting company. Actually, I know Sami Akbay who is one of the co-founders, as well. I remember talking to him a couple years ago and saying, "Hey man, what are you doing? What are you up to? I know you've got to be working on something." And of course, he was. He was working on WebAction, under the covers here.


A question came in for you, Steve, so I'll throw it over to you, of data cleansing, right? Can you talk about these components of this real-time capability? How do you deal with issues like data cleansing or data quality or how does that even work?


Steve: So it really depends on where you're getting your feeds from. Typically, if you're getting your feeds from a database as you change data capture then, again, it depends there on how the data was entered. Data cleansing really becomes a problem when you're getting your data from multiple sources or people are entering it manually or you kind of have arbitrary texts that you have to try and pull things out of. And that could certainly be part of the process, although that type simply doesn't lend itself to true, kind of, high-speed real-time processing. Data cleansing, typically, is an expensive process.


So it may well be that that could be done after the fact in the store site. But the other thing that the platform is really, really good at is correlation, so in correlation and enrichment of data. You can, in real time, correlate the incoming data and check to see whether it matches a certain pattern or it matches data that's being retrieved from a database or Hadoop or some other store. So you can correlate it with historical data, is one thing you could do.


The other thing that you can do is basically do analysis on that data and see whether it kind of matches certain required patterns. And that's something that you can also do in real time. But the traditional kind of data cleansing, where you're correcting company names or you're correcting addresses and all those types of things, those should probably be done in the source or kind of after the fact, which is very expensive and you pray that they won't do those in real time.


Eric: Yeah. And you guys are really trying to address the, of course, the real-time nature of things but also get the people in time. And we talked about, right, I mentioned at the top of the hour, this whole window of opportunity and you're really targeting specific applications at companies where you can pull together data not going the usual route, going this alternate route and do so in such a low latency that you can keep customers. For example, you can keep people satisfied and it's interesting, when I talked to Sami at length about what you guys are doing, he made a really good point. He said, if you look at a lot of the new Web-based applications; let's look at things like Twitter, Bitly or some of these other apps; they're very different than the old applications that we looked at from, say, Microsoft like Microsoft Word.


I often use Microsoft as sort of a whipping boy and specifically Word to talk about the evolution of software. Because Microsoft Word started out as, of course, a word processing program. I'm one of those people who remember Word Perfect. I loved being able to do the reveal keys or the reveal code, basically, which is where you could see the actual code in there. You could clean something up if your bulleted list was wrong, you can clean it up. Well, Word doesn't let you do that. And I can tell you that Word embeds a mountain of code inside every page that you do. If anyone doesn't believe me, then go to Microsoft Word, type "Hello World" and then do "Export as" or "Save as" .html. Then open that document in a text editor and that will be about four pages long of codes just for two words.


So you guys, I thought it was very interesting and it's time we talked about that. And that's where you guys focus on, right, is identifying what you might call cross-platform or cross-enterprise or cross-domain opportunities to pull data together in such quick time that you can change the game, right?


Steve: Yeah, absolutely. And one of the keys that, I think, you did allude to, anyway, is you really want to know about things happening before your customers do or before they really, really become a problem. As an example are the set-top boxes. Cable boxes, they emit telemetry all the time, loads and loads of telemetry. And not just kind of the health of the box but it's what you're watching and all that kind of stuff, right? The typical pattern is you wait till the box fails and then you call your cable provider and they'll say, "Well, we will get to you sometime between the hours of 6am and 11pm in the entire month of November." That isn't a really good customer experience.


But if they could analyze that telemetry in real time then they could start to do things like knowing that these boxes are likely to fail in the next week based on historical patterns. Therefore we'll schedule our cable repair guy to turn up at this person's house prior to it failing. And we'll do that in a way that suits us rather than having to send him from Santa Cruz up to Sunnyvale. We'll schedule everything in a nice order, traveling salesman pattern, etc., so that we can optimize our business. And so the customer is happy because they don't have a failing cable box. And the cable provider is happy because they have just streamlined things and they don't have to send people all over the place. That's just a very quick example. But there are tons and tons of examples where knowing about things as they happen, before they happen, can save companies a fortune and really, really improve their customer relations.


Eric: Yeah, right. No doubt about it. Let's go ahead and move right on to MarkLogic. As I mentioned before, I've known about these guys for quite some time and so I'll bring you into this, Frank. You guys were far ahead of the whole big data movement in terms of building out your application, it's really database. But building it out and you talked about the importance of search.


So a lot of people who followed the space know that a lot of the NoSQL tools out there are now bolting on search capabilities whether through third parties or they try to do their own. But to have that search already embedded in that, baked-in so to speak, really is a big deal. Because if you think about it, if you don't have SQL, well then how do you go in and search the data? How do you pull from that data resource? And the answer is to typically use search to get to the data that you're looking for, right?


So I think that's one of the key differentiators for you guys aside being able to pull data from all these different sources and store that data and really facilitate this sort of hybrid environment. I'm thinking that search capability is a big deal for you, right?


Frank: Yeah, absolutely. In fact, that's the only way to solve the problem consistently when you don't know what all the data is going to look like, right? If you cannot possibly imagine all the possibilities, then the only way to make sure you can locate all the information you want - that you can locate it consistently, and regardless of how you evolve your data model and your data sets - is to give people generic tools that allow them to interrogate that data. And the easiest, most intuitive way to do that is through a search paradigm, right? And through the same approach search takes, where we create an inverted index. You have index entries you can look into to find the records and documents and rows that actually contain the information you're looking for, then return it to the customer and allow them to process it as they see fit.
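
As a rough illustration of the inverted-index idea Frank mentions, here is a toy Python version: each term maps to the set of documents containing it, so records can be located without knowing the data's shape up front. This is a sketch of the general technique, not MarkLogic's actual implementation.

```python
# Toy inverted index: term -> set of document IDs containing that term.
from collections import defaultdict

index: dict[str, set[int]] = defaultdict(set)
docs = {
    1: "cable box telemetry failure",
    2: "customer called about cable outage",
    3: "sensor telemetry from mobile app",
}

for doc_id, text in docs.items():
    for term in text.lower().split():
        index[term].add(doc_id)

def search(query: str) -> set[int]:
    """Return IDs of documents containing every term in the query."""
    results = [index.get(t, set()) for t in query.lower().split()]
    return set.intersection(*results) if results else set()

print(search("cable telemetry"))  # {1}
```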


Eric: Yeah, and we've talked about this a lot, but you're giving me a really good opportunity to dig into it - the whole search and discovery side of this equation. But first of all, it's a lot of fun. For anyone who likes that stuff, this is the fun part, right? But the other side of the equation - or the other side of the coin, I should say - is that it really is an iterative process. And you've got to be able to - here I'll be using some of the marketing language - have that conversation with the data, right? In other words, you need to be able to test a hypothesis, play around with it and see how that works. Maybe it's not there; test something else, constantly change things, iterate, search and research and just think about stuff. And that's a process. And if you have big hurdles, meaning long latencies or a difficult user interface, or you have to go ask IT, that just kills the whole analytical experience, right?


So it's important to have this kind of flexibility and to be able to use search. And I like the way you depicted it here, because if we're searching around different concepts or keys - key values, if you will - they're different dimensions. You want to be able to mix and match that stuff in order to enable your analyst to find useful stuff, right?


Frank: Yeah, absolutely. I mean, hierarchy is an important thing as well, right? So that when you search for something like a title, or a specific term or value, you can actually point to the correct one. So if you're looking for the title of an article, you're not getting titles of books, right? Or titles of blog posts. The ability to distinguish between those, through the hierarchy of the information, is important as well.
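
A small sketch of that hierarchy point - again hypothetical, not MarkLogic's actual mechanism: by indexing terms under their full path in the document structure, a search for an article title won't match a book that happens to have the same title.

```python
# Index terms under their structural path, not just the field name.
from collections import defaultdict

path_index: dict[tuple[str, str], set[int]] = defaultdict(set)
docs = [
    (1, "article/title", "Big Data in Practice"),
    (2, "book/title", "Big Data in Practice"),   # same words, different hierarchy
    (3, "blog/title", "My Big Data Weekend"),
]

for doc_id, path, text in docs:
    for term in text.lower().split():
        path_index[(path, term)].add(doc_id)

# Scoped search: only article titles, so the identically titled book is excluded.
print(path_index[("article/title", "big")])  # {1}
```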


You pointed out the development angle earlier, absolutely, right? The ability for our customers to actually pull in new data sources in a matter of hours, start to work with them, evaluate whether or not they're useful and then either continue to integrate them or leave them by the wayside is extremely valuable. Compare it to a more traditional application development approach, where you have to figure out what data you want to ingest, source the data, figure out how you're going to fit it into your existing data model - or change that data model to incorporate it - and then actually begin the development, right? We kind of turn that on its head and say, just bring it to us, start doing the development with it, and then decide later - or almost immediately - whether or not it's of value and you want to keep it.
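
Here is a minimal sketch of that "bring it in first, model it later" approach, with invented data: heterogeneous records are stored as-is, with no predefined schema, and interrogated after the fact to decide whether a source is worth keeping.

```python
# Schema-on-read in miniature: ingest raw records as-is, evaluate later.
import json

raw_feed = [
    '{"type": "tweet", "user": "a", "text": "at the game"}',
    '{"type": "sensor", "device": "d1", "temp": 71.5}',
]

# Ingest without an up-front data model - just parse and keep the documents.
store = [json.loads(line) for line in raw_feed]

# Evaluate usefulness after the fact: does this source carry a field we care about?
useful = [doc for doc in store if "text" in doc]
print(useful)  # keep the source, or leave it by the wayside
```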


Eric: Yeah, it's a really good point. That's a good point. So let me go ahead and bring in our fourth presenter here, Treasure Data. I love these guys. I didn't know much about them, so I'm kind of kicking myself. And then Hannah came to us and told us what they were doing. And as Hannah mentioned, she was a media person and she went over to the dark side.


Hannah: I did, I defected.


Eric: That's okay, though, because you know what we like in the media world. So it's always nice when a media person goes over to the vendor side, because you understand, hey, this stuff is not that easy to articulate, and it can be difficult to ascertain from a website exactly what this product does versus what that product does. And what you guys are talking about is really quite interesting. Now, you are a cloud-managed service. So any data that someone wants to use, they upload to your cloud, is that right? And then you'll ETL or CDC additional data up to the cloud - is that how it works?


Hannah: Well, yeah. So let me make an important distinction. Most of the data - the big data - that our customers are sending us is already outside the firewall: mobile data, sensor data that's in products. And so we're often used as an interim staging area. So data is not so much coming from somebody's enterprise into our service as it's flowing from a website, a mobile application, a product with lots of sensors in it - into our cloud environment.


Now, if you'd like to enrich that big data in our environment, you can definitely bulk upload some application data or some customer data to enrich it and do more of the analytics directly in the cloud. But a lot of our value is around collecting the data that's already outside the firewall and bringing it together into one place. So even if you do intend to bring it behind your firewall and do more of your advanced analytics, or bring it into your existing BI or analytics environment, it's a really good staging point. Because you don't want to bring a billion rows a day into your data warehouse - it's not cost effective. It's even difficult if you're planning to store that somewhere and then batch upload.
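
As a rough sketch of that staging pattern - with a hypothetical endpoint and payload shape, not Treasure Data's actual API - events born outside the firewall could be posted to a cloud collector like this, rather than loaded straight into the warehouse:

```python
# Post events from a mobile app or device to a cloud collection endpoint.
# COLLECTOR_URL and the payload fields are invented for illustration.
import json
import urllib.request

COLLECTOR_URL = "https://collector.example.com/ingest"  # hypothetical

def send_event(event: dict) -> None:
    body = json.dumps(event).encode("utf-8")
    req = urllib.request.Request(
        COLLECTOR_URL,
        data=body,
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)  # fire-and-forget for the purposes of this sketch

send_event({"app": "mobile", "user": "u123", "action": "open", "ts": 1700000000})
```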


So we're often the first point where data that's already outside the firewall gets collected.


Eric: Yeah, that's a really good point, too. Because a lot of companies are going to be nervous about taking their proprietary customer data, putting it up in the cloud and having to manage the whole process.


Hannah: Yeah.


Eric: And what you're talking about is really giving people a resource for crunching those heavy-duty numbers on, as you suggest, third-party data like mobile data and social data and all that kind of fun stuff. That's pretty interesting.


Hannah: Yeah, absolutely. And they're probably less nervous about that data, because it's already outside the firewall. And so, yeah, before bringing it in - and I really like that refinery term, as I mentioned, versus the lake - you can do some basic refining. Get the good stuff out, and then bring it behind the firewall into your other systems and processes for deeper analysis. So really, your data scientists can do real-time data exploration of this new big data that's flowing in.


Eric: Yeah, that's right. Well, let me go ahead and bring in our analysts and we'll kind of go back in reverse order. I'll start with you, Robin, with respect to Treasure Data and then we'll go to Kirk for some of the others. And then back to Robin and back to Kirk just to kind of get some more assessment of this.


And you know the data refinery, Robin, that Hannah is talking about here. I love that concept. I've heard only a few people talking about it that way, but I do think you've certainly mentioned that before. And it really does speak to what is actually happening to your data. Because, of course, a refinery basically distills stuff down to its root level, if you think about oil refineries. I actually studied this for a while, and it's pretty basic, but the engineering that goes into it needs to be exactly correct or you don't get the stuff that you want. So I think it's a great analogy. What do you think about this whole concept of the Treasure Data cloud service helping you tackle some of those very specific analytical needs without having to bring stuff in-house?


Robin: Well, I mean, obviously it depends on the circumstances how convenient that is. But anybody that's got a ready-made process is going to put you ahead of the game if you haven't got one yourself. That's the first takeaway for something like that. If somebody has assembled something, it's proven in the marketplace, and therefore there's some kind of value in it - the work has already gone into it. And there's also the very general fact that refining data is going to be a much bigger issue than it ever was before. I mean, in my opinion, it's not talked about as much as it should be. Quite apart from the fact that the size of the data has grown, the number of sources and the variety of those sources has grown quite considerably. And then there's the reliability of the data - whether it's clean, the need to disambiguate it - all sorts of issues that arise just in terms of the governance of the data.


So before you actually get around to being able to do reliable analysis on it - you know, if your data's dirty, then your results will be skewed in some way or another. So that is something that has to be addressed, that has to be known about. And Treasure Data, as far as I can see, is providing a very viable service to assist in that.
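
A toy example of the kind of refining step Robin alludes to, with invented rules: normalize and deduplicate what you can, and hold back records that are too ambiguous to resolve automatically rather than guessing, since dirty data skews the analysis downstream.

```python
# Minimal data-refinery pass: normalize, deduplicate, flag ambiguous rows.
records = [
    {"name": "ACME Corp ", "city": "london"},
    {"name": "Acme Corp", "city": "London"},   # duplicate after normalization
    {"name": "Acme", "city": None},            # too ambiguous to disambiguate
]

def refine(rows):
    seen, clean, needs_review = set(), [], []
    for r in rows:
        if not r["city"]:
            needs_review.append(r)  # route to manual disambiguation, don't guess
            continue
        key = (r["name"].strip().lower(), r["city"].strip().lower())
        if key not in seen:
            seen.add(key)
            clean.append({"name": key[0], "city": key[1]})
    return clean, needs_review

clean, review = refine(records)
print(clean)   # one deduplicated Acme record
print(review)  # the ambiguous one, held back
```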


Eric: Yes, indeed. Well, let me go ahead and bring Kirk back into the equation here just real quickly. I wanted to take a look at one of these other slides and kind of get your impression of things, Kirk. So maybe let's go back to this MarkLogic slide. And by the way, Kirk provided the link, if you didn't see it, folks, to some of his class-discovery slides, because that's a very interesting concept. And I think this has been brewing at the back of my mind, Kirk, as I was talking about it a moment ago. This whole question that one of the attendees posed about how you go about finding new classes. I love this topic because it really does speak to the difficult side of categorizing things - because I've always had a hard time categorizing stuff. I'm like, "Oh, god, it could fit in five categories, where do I put it?" So I just don't want to categorize anything, right?


And that's why I love search - because you don't have to categorize it, you don't have to put it in a folder. Just search for it, and you'll find it if you know how to search. But if you're in that process of trying to segment - because that's basically what categorization is, segmenting - then finding new classes is kind of an interesting thing. Can you speak to the power of search and semantics and hierarchies, for example, as Frank was talking about with respect to MarkLogic, and the role that plays in finding new classes? What do you think about that?


Kirk: Well, first of all, I'd say you are reading my mind. Because that was the question I was thinking of even before you started talking - this whole semantic piece that MarkLogic presented. And if you come back to my slides - you don't have to do this, but back on slide five of what I presented this afternoon - I talked about the semantics that need to be captured along with the data.


So this whole idea of search - there you go. I firmly believe in that, and I've always believed in that with big data. Take the analogy of the Internet - just the Web. Having the world's knowledge and information and data in a Web browser is one thing. But to have it searchable and retrievable efficiently, as one of the big search engine companies provides for us - that's where the real power of discovery is. Because it connects the search terms - the user's areas of interest, if you like - to the particular data granule: the particular webpage, if you want to think of the Web example, or the particular document if you're talking about a document library, or a particular customer segment if that's your space.


And semantics gives you that sort of knowledge layered on top of just a word search. If you're searching for a particular type of thing, you understand that a member of a class of such things can have a certain relationship to other things. If you include that relationship information - that class hierarchy information - you can find things that are similar to what you're looking for, or sometimes even the exact opposite of what you're looking for, because that in a way gives you an additional core of understanding. Well, probably something is the opposite of this.


Eric: Yeah.


Kirk: So you actually understand this, and you can see something that's the opposite of this. And so the semantic layer is a valuable component that's frequently missing, and it's interesting that this would come up here in this context. Because I've taught a graduate course in databases, data mining, learning from data, data science - whatever you want to call it - for over a decade, and one of my units in this semester-long course is on semantics and ontology. And frequently my students would look at me like, what does this have to do with what we're talking about? And of course by the end, I think, we do come to understand the value of putting that data in some kind of knowledge framework. So that, just for example, if I'm looking for information about a particular customer behavior, I understand where that behavior occurs - say, what people buy at a sporting event. What kind of products do I offer my customers when I notice on their social media - on Twitter or Facebook - that they say they're going to a sporting event like football, baseball, hockey, the World Cup, whatever it might be?


Okay, so, sporting event. So they say they're going to, let's say, a baseball game. Okay, I understand that baseball is a sporting event. I understand that it's usually social and you go with people. I understand that it's usually in an outdoor space. Understanding all those contextual features enables a more powerful segmentation of the customer involved and personalization of the experience you're giving them when, for example, they're interacting with your space through a mobile app while they're sitting in a stadium.
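
A minimal sketch of that contextual layering, using a toy ontology invented for illustration: "baseball" is a kind of sporting event, and the features inherited from that class (social, outdoor) are what drive the segmentation.

```python
# Toy ontology: walk the is_a chain and inherit contextual features.
ONTOLOGY = {
    "baseball": {"is_a": "sporting_event"},
    "football": {"is_a": "sporting_event"},
    "sporting_event": {"social": True, "outdoor": True},
}

def enrich(term: str) -> dict:
    """Merge the features a term inherits from its ancestor classes."""
    features: dict = {}
    node = ONTOLOGY.get(term, {})
    while node:
        features.update({k: v for k, v in node.items() if k != "is_a"})
        node = ONTOLOGY.get(node.get("is_a", ""), {})
    return features

# A tweet says "going to a baseball game" -> segment the customer accordingly.
print(enrich("baseball"))  # {'social': True, 'outdoor': True}
```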


So all that kind of stuff just brings so much more power and discovery potential to the data. That idea of indexing data granules by their place in the semantic and knowledge space is really pretty significant. And I was really impressed that it came out today. I think it's a fundamental thing to talk about.


Eric: Yeah, it sure is. It's very important in the discovery process; it's very important in the classification process. And if you think about it, Java works in classes. It's an object-oriented form of programming, more or less, you could say, and Java works in classes. So if you're actually designing software, this whole concept of trying to find new classes is actually pretty important stuff in terms of the functionality you're trying to deliver. Because especially in this new wild, wooly world of big data, where you have so much Java out there running so many of these different applications, you know there are 87,000 ways or more to get anything done with a computer, to get any kind of bit of functionality done.


One of my running jokes is, when people say, "Oh, you can build a data warehouse using NoSQL," I'm like, "Well, you could, yeah, that's true. You could also build a data warehouse using Microsoft Word. It's not the best idea, it's not going to perform very well, but you can actually do it." So the key is that you have to find the best way to do something.


Go ahead.


Kirk: Let me just respond to that. It's interesting you mentioned the Java class example, which didn't come into my mind until you said it. One of the aspects of Java and classes and that sort of object orientation is that there are methods that bind to specific classes. And this is really the sort of message I was trying to send in my presentation: once you understand some of these data granules - these knowledge nuggets, these tags, these annotations and these semantic labels - then you can bind a method to them. They basically have this reaction or this response, and your system can provide this sort of automated, proactive response the next time we see that thing in the data stream.


So that concept of binding actions and methods to specific classes is really one of the powers of automated real-time analytics. And I think that you sort of hit on something.
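
Here is a short Python sketch of that binding idea, with invented labels and handlers: once a data granule carries a semantic label, the method bound to that label fires automatically whenever a matching granule appears in the stream.

```python
# Bind handler methods to semantic labels; dispatch as events arrive.
HANDLERS = {}

def on_label(label):
    """Decorator that registers a handler for a given semantic label."""
    def register(fn):
        HANDLERS[label] = fn
        return fn
    return register

@on_label("likely_failure")
def schedule_repair(event):
    print(f"scheduling repair for {event['box_id']}")

@on_label("churn_risk")
def offer_retention_deal(event):
    print(f"offering deal to {event['customer']}")

def process(event):
    # The semantic label on the granule selects the bound method.
    handler = HANDLERS.get(event.get("label"))
    if handler:
        handler(event)

process({"label": "likely_failure", "box_id": "box-42"})
```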


Eric: Good, good, good. Well, this is good stuff. So let's see - Will, I want to hand it back to you and actually throw a question to you from the audience. We've got a few of those in here, too. And folks, we're going long because we want to get to some of these great concepts in these good questions.


So let me throw a question over to you from one of the audience members, who's saying, "I'm not really seeing how business intelligence is distinguishing cause and effect." In other words, as the systems are making decisions based on observable information, how do they develop new models to learn more about the world? It's an interesting point, so I'm hearing a cause-and-effect correlation here, root cause analysis - and that's some of that higher-end stuff in the analytics that you guys talk about, as opposed to traditional BI, which is really just kind of reporting and understanding what happened. And of course, your whole direction, just looking at your slide here, is moving toward that predictive capability, toward making those decisions, or at least making those recommendations, right? So the idea is that you guys are trying to serve the whole range of what's going on, and you understand that the key, the real magic, is in the analytical goal component there on the right.


Will: Absolutely. I think that question is somewhat peering into the future, in the sense that data science - as I mentioned before, we saw the slide with the requirements of the data scientist - is a pretty challenging role for someone to be in. They have to have that rich knowledge of statistics and science. You need to have the domain knowledge to apply your mathematical knowledge to the domains. So what we're seeing today is that there aren't out-of-the-box predictive tools that a business user could pull up in Excel and automatically predict their future, right?


It does require that advanced knowledge of the technology at this stage. Now, someday in the future, it may be that some of these scale-out systems become sentient and start doing some wild stuff. But I would say at this stage you still have to have a data scientist in the middle to continue to build the models. These predictive models around data mining and such are highly tuned and built by the data scientist. They're not generated on their own, if you know what I mean.


Eric: Yeah, exactly. That's exactly right. And one of my lines is "Machines don't lie, at least not yet."


Will: Not yet, exactly.


Eric: I did read an article - I have to write something about this - about some experiment that was done at a university where they said that these computer programs learned to lie, but I got to tell you, I don't really believe it. We'll do some research on that, folks.


And for the last comment, Robin, I'll bring you back in to take a look at this WebAction platform, because this is very interesting. This is what I love about this whole space: you get such different perspectives and different angles taken by the various vendors to serve very specific needs. And I love this format for our show, because we've got four really interesting vendors that are, frankly, not really stepping on each other's toes at all. Because they're all doing different bits and pieces of the same overall need, which is to use analytics to get stuff done.


But I just want to get your perspective on this specific platform and their architecture - how they're going about doing things. I find it pretty compelling. What do you think?


Robin: Well, I mean, it's pointed at extremely fast results from streaming data, and as such, you have to architect for that. I mean, you're not going to get away with doing anything amateurish with any of that stuff. I think this is extremely interesting, and I think one of the things we've witnessed over the past couple of years - I mean, our jaws have been dropping more and more - is seeing more and more stuff emerge that was just extraordinarily fast, extraordinarily smart and pretty much unprecedented.


Obviously, this isn't WebAction's first rodeo, so to speak. It's actually been out there taking names to a certain extent. So I don't suppose we should be surprised that the architecture is fairly switched-on, but it surely is.


Eric: Well, I'll tell you what, folks. We've burned through a solid 82 minutes here. I mean, thank you to all those folks who have been listening the whole time. If you have any questions that were not answered, don't be shy: send an email to yours truly. You should have an email from me lying around somewhere. And a big, big thank you to both our presenters today, to Dr. Kirk Borne and to Dr. Robin Bloor.


Kirk, I'd like to further explore some of that semantic stuff with you, perhaps in a future webcast, because I do think that we're at the beginning of a very new and interesting stage now. We're going to be able to leverage a lot of the ideas people have and make them happen much more easily because, guess what, the software is getting less expensive and, I should say, more usable, and we're just getting all this data from all these different sources. And I think it's going to be a very interesting and fascinating journey over the next few years as we really dig into what this stuff can do and how it can improve our businesses.


So a big thank you to Techopedia as well and, of course, to our sponsors - Pentaho, WebAction, MarkLogic and Treasure Data. And folks, wow, with that we're going to conclude, but thank you so much for your time and attention. We'll catch you in about a month and a half for the next show. And of course, the briefing room keeps on going; radio keeps on going; all our other webcast series keep on rocking and rolling, folks. Thank you very much. We'll catch you next time. Bye-bye.
