ABOUT US: Arque Capital is a FinTech startup applying AI to Finance in domains such as Asset Management (Hedge Funds, ETFs, and Structured Products), Robo Advisory, Bespoke Research, Alternate Brokerage, and other applications of Technology & Quantitative methods in Big Finance.

PROFILE DESCRIPTION:
1. Get the "Tech" in order for the Hedge Fund - help answer the fundamentals of which technology blocks to use, justify the choice of one platform/technology over another, and help the team visualize the product with the available resources and assets
2. Build, manage, and validate a Tech Roadmap for our Products
3. Architecture Practices - at startups, the dynamics change very fast, so making sure that best practices are defined and followed by the team is very important. The CTO may have to take out the garbage and clean up code from time to time; regular code-quality reviews are an activity the CTO should own
4. Build a progressive learning culture and establish a predictable model for envisioning, designing, and developing products
5. Product Innovation through research and continuous improvement
6. Build out the Technological Infrastructure for the Hedge Fund
7. Hire and build out the Technology team
8. Set up and manage the entire IT infrastructure - hardware as well as cloud
9. Ensure company-wide security and IP protection

REQUIREMENTS:
Computer Science Engineer from Tier-I colleges only (IIT, IIIT, NIT, BITS, DHU, Anna University, MU)
5-10 years of relevant technology experience (no pure infrastructure or database profiles)
Expertise in Python and C++ (3+ years minimum)
2+ years of experience building and managing Big Data projects
Experience with technical design & architecture (1+ years minimum)
Experience with High Performance Computing - OPTIONAL
Experience as a Tech Lead, IT Manager, Director, VP, or CTO
1+ year of experience managing cloud computing infrastructure (Amazon AWS preferred) - OPTIONAL
Ability to work in an unstructured environment
Looking to work in a small, startup-type environment based out of Mumbai

COMPENSATION: Co-Founder status and equity partnership
We invest in people and the ideas they bring to the table instead of spending fortunes marketing a half-baked product. The spirit of innovation and a constant urge to help the average online shopper save big have helped us become profitable in less than 6 months of operation. With over 6 million unique monthly visitors, we are India's #1 coupons and deals website.

We're looking for a Business Analyst who takes a high degree of ownership and enjoys: studying business functions, documenting and maintaining system processes across different functions of the business; gathering information; setting up processes; and monitoring progress by tracking project activity and publishing progress reports.

Responsibilities include:
Defining configuration specifications and business analysis requirements
Performing quality assurance
Defining reporting and alerting requirements
Communicating key insights and findings to the product team

Requirements:
Previous experience in problem-solving and insight-driven decision making
A degree in engineering (preferably IT/CS) and experience working on web product(s)
Ability to work independently
Basic knowledge of generating process documentation
Strong written and verbal communication skills, including technical writing skills
Your Role:
· As an integral part of the Data Engineering team, be involved in the entire development lifecycle, from conceptualization to architecture to coding to unit testing
· Build a real-time and batch analytics platform for analytics & machine learning
· Design, propose, and develop solutions keeping the growing scale & business requirements in mind
· Help us design the Data Model for our data warehouse and other data engineering solutions

Must Have:
· Understands data very well and has extensive data modelling experience
· Deep understanding of real-time as well as batch-processing big data technologies (Spark, Storm, Kafka, Flink, MapReduce, YARN, Pig, Hive, HDFS, Oozie, etc.)
· Experience developing applications that work with NoSQL stores (e.g., Elasticsearch, HBase, Cassandra, MongoDB, CouchDB)
· Proven programming experience in Java or Scala
· Experience in gathering and processing raw data at scale, including writing scripts, web scraping, calling APIs, writing SQL queries, etc.
· Experience with cloud-based data stores like Redshift and BigQuery is an advantage

Bonus:
· Love sports - especially cricket and football
· Have worked previously in a high-growth tech startup
Your Role:
• You will lead the strategy, planning, and engineering for Data at Dream11
• Build a robust real-time & batch analytics platform for analytics & machine learning
• Design and develop the Data Model for our data warehouse and other data engineering solutions
• Collaborate with various departments to develop and maintain a data platform solution, and recommend emerging technologies for data storage, processing, and analytics

MUST have:
• 9+ years of experience in data engineering, data modelling, and schema design, and 5+ years of programming expertise in Java or Scala
• Understanding of real-time as well as batch-processing big data technologies (Spark, Storm, Kafka, Flink, MapReduce, YARN, Pig, Hive, HDFS, Oozie, etc.)
• Developed applications that work with NoSQL stores (e.g. Elasticsearch, HBase, Cassandra, MongoDB, CouchDB)
• Experience in gathering and processing raw data at scale, including writing scripts, web scraping, calling APIs, writing SQL queries, etc.
• Bachelor's/Master's in Computer Science/Engineering or a related technical degree

Bonus:
• Experience with cloud-based data stores like Redshift and BigQuery is an advantage
• Love sports - especially cricket and football
• Have worked previously in a high-growth tech startup
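Both data engineering postings above lean on the MapReduce model (Spark, Hadoop M/R, etc.). As a rough sketch of the paradigm only, not any framework's actual API, here is a word count expressed as explicit map, shuffle, and reduce phases in plain Python:

```python
from collections import defaultdict

def map_phase(lines):
    # Map: emit a (word, 1) pair for every word in every input line
    for line in lines:
        for word in line.split():
            yield (word.lower(), 1)

def shuffle(pairs):
    # Shuffle: group all emitted values by key, as the framework
    # would do over the network between the map and reduce phases
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce: sum the counts for each word
    return {word: sum(counts) for word, counts in groups.items()}

lines = ["spark storm kafka", "kafka flink spark"]
counts = reduce_phase(shuffle(map_phase(lines)))
```

In a real cluster the map and reduce calls run in parallel across machines and the shuffle is network I/O, but the structure is the same.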
Your Role:
· Find insights that influence decisions (particularly new product/feature ideas). This work will span from early data explorations about user behavior to multivariate experiments and optimizations
· Lead the structure and analysis of A/B tests, and help size opportunities to prioritize features
· Work very closely with other teams at Dream11 for requirement gathering and analysis
· Analyze and provide recommendations to influence our product and company strategy
· Provide user insights through data: cohort analyses, user segmentation, long-term trends, and behavioural analyses

MUST have:
· Advanced problem-solving skills
· Expertise in deriving insights from large datasets across multiple sources, and fundamentally strong data analytics and statistics
· Understanding of the key analytical techniques used in consumer web services, including funnel analysis, conversion & drop-off rates, and cohort analysis
· Experience with BI tools like Tableau, Looker, Power BI, QlikView, Spotfire, etc.

Bonus:
· Have experience with statistical analysis languages, e.g., R, MATLAB
· Die-hard Cricket and/or Football fans
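The posting above mentions funnel analysis and conversion/drop-off rates. A minimal sketch of the idea on made-up event data (the user IDs, step names, and `funnel_counts` helper are all hypothetical, for illustration only):

```python
# Hypothetical event log: user_id -> furthest step reached in the funnel
funnel_steps = ["visit", "signup", "deposit", "first_contest"]
events = {
    "u1": "first_contest", "u2": "signup", "u3": "visit",
    "u4": "deposit", "u5": "signup", "u6": "first_contest",
}

def funnel_counts(events, steps):
    # A user whose furthest step is i has, by definition,
    # also passed through every earlier step 0..i
    reached = {step: 0 for step in steps}
    for furthest in events.values():
        for step in steps[: steps.index(furthest) + 1]:
            reached[step] += 1
    return reached

counts = funnel_counts(events, funnel_steps)

# Step-to-step conversion rate; the drop-off rate is its complement
conversion = {}
for a, b in zip(funnel_steps, funnel_steps[1:]):
    conversion[f"{a}->{b}"] = counts[b] / counts[a]
```

Cohort analysis follows the same pattern, except users are first bucketed by signup period and the funnel (or retention) is computed per bucket.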
The hunt is for an AWS Big Data / DWH Architect with the ability to manage effective relationships with a wide range of stakeholders (customers & team members alike). The incumbent will demonstrate personal commitment and accountability to ensure standards are continuously sustained and improved, both within the internal teams and with partner organizations and suppliers.

We at Nitor Infotech, a Product Engineering Services company, are always on the hunt for the best talent in the IT industry, in keeping with our trend of "What next in IT". We are scouting for result-oriented people with a passion for product, technology services, and creating great customer experiences; someone who can take the current expertise & footprint of Nitor Infotech Inc. to an altogether different dimension and level, in tune with emerging market trends, and ensure Brilliance @ Work continues to prevail in whatever we do. Nitor Infotech works with global ISVs to help them build and accelerate their product development. Nitor is able to do so because product development is its DNA, enriched by 10 years of expertise, best practices, and frameworks & accelerators. This ability has enabled Nitor Infotech to build business relationships with product companies with revenues from $50 Million to $1 Billion.

• 7-12+ years of relevant experience in the Database, BI, and Analytics space, with experience architecting and designing data warehouses, including 2 to 3 years in the Big Data ecosystem
• Experience in data warehouse design on AWS
• Strong architecting, programming, and design skills, with a proven track record of architecting and building large-scale, distributed big data solutions
• Provides professional and technical advice on Big Data concepts and technologies, in particular highlighting the business potential of real-time analysis
• Provides technical leadership in the Big Data space (Hadoop stack: M/R, HDFS, Pig, Hive, HBase, Flume, Sqoop, etc.; NoSQL stores: MongoDB, Cassandra, HBase, etc.)
• Performance tuning of Hadoop clusters and Hadoop MapReduce routines
• Evaluates and recommends the Big Data technology stack for the platform
• Drives significant technology initiatives end to end and across multiple layers of architecture
• Should have breadth of BI knowledge, including: MSBI; database design; newer visualization tools like Tableau, QlikView, and Power BI
• Understands the internals and intricacies of old and new DB platforms, including: strong RDBMS fundamentals in at least one of SQL Server / MySQL / Oracle, plus DB and DWH design; designing semantic models using OLAP and Tabular models with MS and non-MS tools; NoSQL DBs, including document, graph, search, and columnar DBs
• Excellent communication skills and a strong ability to build good rapport with prospective and existing customers
• Be a mentor and go-to person for junior members of the team

Qualification & Experience:
· Educational qualification: BE/ME/B.Tech/M.Tech, BCA/MCA/BCS/MCS, or any other degree with a relevant IT qualification.
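The architect role above asks for OLAP cube and semantic-model design. As a toy illustration in plain Python (made-up fact table; real work would use an OLAP or Tabular modelling tool), a two-dimension "cube" is an aggregation over a fact table, and a roll-up collapses one dimension:

```python
from collections import defaultdict

# Hypothetical fact table: (region, product, revenue)
facts = [
    ("West", "bat", 100.0),
    ("West", "ball", 40.0),
    ("East", "bat", 70.0),
    ("East", "bat", 30.0),
]

# The "cube": aggregate the revenue measure over both dimensions
cube = defaultdict(float)
for region, product, revenue in facts:
    cube[(region, product)] += revenue

# Roll-up: collapse the product dimension to get totals per region
by_region = defaultdict(float)
for (region, _product), revenue in cube.items():
    by_region[region] += revenue
```

An OLAP engine precomputes and indexes such aggregates along every dimension hierarchy so that slice, dice, and roll-up queries return instantly.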
The role is for a large insurance engagement; the individual will work with the client to identify, comprehend, and solve problems using statistical approaches in RStudio. The role involves data analytics and statistical modelling. Tools: R & SQL.
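The posting above names R and SQL as the tools. For illustration only, here is the kind of statistical model involved, a simple ordinary-least-squares regression, written in plain Python on made-up data; in RStudio the equivalent one-liner would be `lm(y ~ x)`:

```python
# Simple linear regression (OLS) on made-up data, roughly y = 2x
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 3.9, 6.2, 8.0, 9.9]

def ols(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # slope = cov(x, y) / var(x); intercept recovered from the means
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    intercept = mean_y - slope * mean_x
    return slope, intercept

slope, intercept = ols(xs, ys)
```

In practice R adds what matters for an engagement like this on top of the point estimate: standard errors, residual diagnostics, and significance tests.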
• Good knowledge of project planning and project tracking at the macro level; strong ability to manage and execute on multiple project timelines and priorities
• Experience with Scrum methodology and Agile practices; should have experience with JIRA
• Strong leadership, project management, time management, and problem-solving skills
• Experience in customer handling; should have worked in different project phases
• Experience in requirement gathering and task mapping
• Experience in effort estimation, project reporting, and status tracking
• Should have experience in project-level budget control, budget and resource management, and both internal and customer-facing project status reporting
• Experience in Business Intelligence (BI) and Data Warehousing (DW)
• Clear understanding of Cube, OLAP, and OLTP concepts
• Hands-on experience with schema design and data modeling
• Should have a clear understanding of DFDs and ERDs
• Ability to translate complex data flows/transformations into sequences of ETL implementation tasks
• Thorough knowledge of ETL, data quality, data cleansing, and data blending tools
• Familiarity with AWS Data Pipeline a plus
• Proven ability to write and tune SQL queries; understanding of columnar DBs like Redshift and Actian Vector, and conventional RDBMSs like SQL Server, MySQL, etc.
• Knowledge of one or more OLAP engines (e.g. Mondrian) and reporting and visualization tools (e.g. Tableau, Spotfire, Jaspersoft, QlikView, Looker, DataWatch, Kibana, Apache Zeppelin, etc.)
• Ability to understand the domain and gain business knowledge quickly
• If Python, the candidate must have used libraries like Pandas, SciPy, NumPy, Twython, Tweepy, the Anaconda distribution, etc.
• Experience in one or more structured DBMSs: MySQL, SQL Server, Redshift, Postgres
• Experience in one or more of: DynamoDB, MongoDB, Cassandra
• Experience with ingestion, processing, and visualization of "Big Data" - more than 500 GB; >1 TB preferred
• Hands-on with Hadoop components like AWS EMR or HDFS, Sqoop, Oozie, HBase, Hive, MapReduce, etc.
• Working knowledge of one or more of: Kafka, Redis, Spark, Solr, Elasticsearch, Storm, Kinesis
• Strong analytical and problem-solving skills
• Experience in handling large teams, with good interpersonal skills
• Self-motivated, team player, action- and results-oriented
• Well organized, with good communication and reporting skills
• Ability to work successfully under tight project deadlines

No single candidate is expected to have every one of these skills at once, but they are the core skills for this Data Engineer position; the candidate should at least be theoretically proficient in all of the above.
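The list above repeatedly asks for the ability to write and tune SQL against relational stores. A minimal sketch of the kind of aggregation query involved, using Python's built-in `sqlite3` as a stand-in for the DBMSs named (the table and data are made up):

```python
import sqlite3

# In-memory SQLite as a stand-in for MySQL / SQL Server / Redshift / Postgres
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (user_id TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?)",
    [("u1", 10.0), ("u1", 5.0), ("u2", 7.5)],
)

# A typical aggregation: total spend per user, biggest spender first
rows = conn.execute(
    "SELECT user_id, SUM(amount) AS total "
    "FROM orders GROUP BY user_id ORDER BY total DESC"
).fetchall()
```

Tuning the same query on a production store would start with `EXPLAIN` output and an index on the grouped column; on a columnar DB like Redshift, the distribution and sort keys play that role instead.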