We’re finding better ways for cities to move, work, and thrive. Download the app and get a ride in minutes. Or become a driver and earn money on your schedule.
JOB REQUIREMENTS:
- Minimum 3-5 years of experience in software development
- Strong fundamentals in data structures and algorithms
- Experience with Python, Cassandra, Spark, MongoDB, Kafka, and ActiveMQ
- Understanding of microservice and distributed architectures
- Understanding of asynchronous programming
- Knowledge of messaging services such as pub/sub or streaming platforms (e.g., Kafka, ActiveMQ, RabbitMQ)
- Understanding of end-to-end development, including deployment and monitoring
- Experience with SQL and NoSQL databases
- Good debugging skills
- Good analytical and problem-solving skills
Sr. Data Engineer Job Description

About Us: DataWeave is a data platform that aggregates publicly available data from disparate sources and makes it available in the right format to enable companies to take strategic decisions using trans-firewall analytics. It's hard to tell what we love more, problems or solutions! Every day, we choose to address some of the hardest data problems there are. We are in the business of making sense of messy public data on the web. At serious scale!

Requirements:
- Building an intelligent and highly scalable crawling platform
- Data extraction and processing at scale
- Enhancing existing data stores and data models
- Building a low-latency API layer for serving data to power dashboards, reports, and analytics functionality
- Constantly evolving our data platform to support new features

Expectations:
- 4+ years of relevant industry experience
- Strong algorithms and problem-solving skills
- Software development experience in one or more general-purpose programming languages (e.g., Python, C/C++, Ruby, Java, C#)
- Exceptional coding abilities and experience building large-scale, high-availability applications
- Experience with search/information-retrieval platforms such as Solr, Lucene, and Elasticsearch
- Experience building and maintaining large-scale web crawlers
- In-depth knowledge of SQL and NoSQL datastores
- Ability to design and build quick prototypes
- Experience working on cloud-based infrastructure such as AWS or GCE

Growth at DataWeave:
- Fast-paced growth opportunities at a dynamically evolving start-up
- The opportunity to work in many different areas and explore a wide variety of tools to figure out what really excites you
Couture.ai is building a patent-pending AI platform targeted at vertical-specific solutions. The platform is already licensed by Reliance Jio and a few European retailers to power real-time experiences for their combined 200+ million end users. For this role, a credible display of innovation in past projects (or academia) is a must. We are looking for a candidate who lives and breathes data and algorithms, loves to play with big-data engineering, and is hands-on with Apache Spark, Kafka, RDBMS/NoSQL databases, big-data analytics, and Unix and production servers. A tier-1 college (BE from the IITs, BITS Pilani, top NITs, or IIITs, or an MS from Stanford, Berkeley, CMU, or UW–Madison) or an exceptionally bright work history is a must. Let us know if this interests you and you would like to explore the profile further.
We are looking for developers with 4-8 years of experience in Java and Big Data.
Work location: Bangalore
Qualification: Any graduate or postgraduate from tier-1 and tier-2 colleges
- Strong programming skills in Java and Big Data
- Hands-on experience with Big Data systems such as Spark, HDFS, Hive, and Kafka
- Excellent written and verbal communication skills, with the ability to communicate the design of algorithms and systems to other members of the group
Interested candidates, please share your resume with Shyam.firstname.lastname@example.org
Shyam Sugathan | Senior Recruiter, Human Resources | Shyam.email@example.com | 2nd Floor | Noor Complex | Mavoor Road | Calicut-4 | Kerala | India | www.hapstive.com
The hunt is for an AWS Big Data / DWH Architect with the ability to manage effective relationships with a wide range of stakeholders (customers and team members alike). The incumbent will demonstrate personal commitment and accountability to ensure standards are continuously sustained and improved, both within the internal teams and with partner organizations and suppliers.

We at Nitor Infotech, a product engineering services company, are always on the hunt for the best talent in the IT industry, in keeping with our trend of "What next in IT." We are scouting for result-oriented people with a passion for products, technology services, and creating great customer experiences; someone who can take the current expertise and footprint of Nitor Infotech Inc. to an altogether different dimension and level, in tune with emerging market trends, and ensure "Brilliance @ Work" continues to prevail in whatever we do. Nitor Infotech works with global ISVs to help them build and accelerate their product development. Nitor is able to do so because product development is in its DNA, enriched by 10 years of expertise, best practices, frameworks, and accelerators. This ability has allowed Nitor Infotech to build business relationships with product companies with revenues from $50 million to $1 billion.

• 7-12+ years of relevant experience in the database, BI, and analytics space, including 0-2 years of architecting and designing data warehouses and 2-3 years in the Big Data ecosystem
• Experience in data warehouse design on AWS
• Strong architecting, programming, and design skills, with a proven track record of architecting and building large-scale, distributed big-data solutions
• Professional and technical advice on Big Data concepts and technologies, in particular highlighting the business potential through real-time analysis
• Provides technical leadership in the Big Data space (Hadoop stack including MapReduce, HDFS, Pig, Hive, HBase, Flume, Sqoop, etc., and NoSQL stores such as MongoDB, Cassandra, and HBase)
• Performance tuning of Hadoop clusters and Hadoop MapReduce routines
• Evaluate and recommend a Big Data technology stack for the platform
• Drive significant technology initiatives end to end and across multiple layers of the architecture
• Should have breadth of BI knowledge, including MSBI, database design, and newer visualization tools such as Tableau, QlikView, and Power BI
• Understands the internals and intricacies of old and new DB platforms, including: strong RDBMS fundamentals in at least one of SQL Server, MySQL, or Oracle; DB and DWH design; designing semantic models using OLAP and tabular models with MS and non-MS tools; NoSQL DBs including document, graph, search, and columnar DBs
• Excellent communication skills and a strong ability to build good rapport with prospective and existing customers
• Be a mentor and go-to person for junior members of the team

Qualification & Experience:
· Educational qualification: BE/ME/B.Tech/M.Tech, BCA/MCA/BCS/MCS, or any other degree with a relevant IT qualification.
Looking for Big Data developers in Mumbai.
Your Role:
· As an integral part of the Data Engineering team, be involved in the entire development lifecycle, from conceptualization to architecture to coding to unit testing
· Build a real-time and batch analytics platform for analytics and machine learning
· Design, propose, and develop solutions keeping the growing scale and business requirements in mind
· Help us design the data model for our data warehouse and other data engineering solutions

Must Have:
· Understands data very well and has extensive data-modelling experience
· Deep understanding of both real-time and batch-processing big-data technologies (Spark, Storm, Kafka, Flink, MapReduce, YARN, Pig, Hive, HDFS, Oozie, etc.)
· Experience developing applications that work with NoSQL stores (e.g., Elasticsearch, HBase, Cassandra, MongoDB, CouchDB)
· Proven programming experience in Java or Scala
· Experience gathering and processing raw data at scale, including writing scripts, web scraping, calling APIs, and writing SQL queries
· Experience with cloud-based data stores such as Redshift and BigQuery is an advantage

Bonus:
· Love sports, especially cricket and football
· Have worked previously at a high-growth tech startup
Your Role:
• You will lead the strategy, planning, and engineering for data at Dream11
• Build a robust real-time and batch analytics platform for analytics and machine learning
• Design and develop the data model for our data warehouse and other data engineering solutions
• Collaborate with various departments to develop and maintain a data platform solution, and recommend emerging technologies for data storage, processing, and analytics

MUST Have:
• 9+ years of experience in data engineering, data modelling, and schema design, and 5+ years of programming expertise in Java or Scala
• Understanding of both real-time and batch-processing big-data technologies (Spark, Storm, Kafka, Flink, MapReduce, YARN, Pig, Hive, HDFS, Oozie, etc.)
• Developed applications that work with NoSQL stores (e.g., Elasticsearch, HBase, Cassandra, MongoDB, CouchDB)
• Experience gathering and processing raw data at scale, including writing scripts, web scraping, calling APIs, and writing SQL queries
• Bachelor's/Master's in Computer Science/Engineering or a related technical degree

Bonus:
• Experience with cloud-based data stores such as Redshift and BigQuery is an advantage
• Love sports, especially cricket and football
• Have worked previously at a high-growth tech startup
Skills: Big Data, business intelligence, Python, R
We are an early-stage startup working in the space of analytics, big data, machine learning, data visualization on multiple platforms, and SaaS. We have offices in Palo Alto and at WTC, Kharadi, Pune, and count some marquee names among our customers. We are looking for a really good Python programmer who MUST have scientific programming experience. Hands-on experience with NumPy and the Python scientific stack is a must, along with a demonstrated ability to track and work with hundreds to thousands of files and gigabytes to terabytes of data. Exposure to ML and data-mining algorithms is expected, and you need to be comfortable working in a Unix environment and with SQL.

You will be required to do the following:
- Use command-line tools to perform data conversion and analysis
- Support other team members in retrieving and archiving experimental results
- Quickly write scripts to automate routine analysis tasks
- Create insightful, simple graphics to represent complex trends
- Explore, design, and invent new tools and design patterns to solve complex big-data problems

Experience working on a long-term, lab-based project (academic experience is acceptable) is required.