Couture.ai is building a patent-pending AI platform targeted at vertical-specific solutions. The platform is already licensed by Reliance Jio and a few European retailers, to power real-time experiences for their combined >200 million end users. The founding team consists of BITS Pilani alumni with experience creating global startup success stories. The core team we are building consists of some of the best minds in India in artificial intelligence research and data engineering. We are looking for candidates for multiple roles with 2-7 years of research/large-scale production implementation experience and:
- Rock-solid algorithmic capabilities.
- Production deployments for massively large-scale systems, real-time personalization, big data analytics, and semantic search.
- Or credible research experience in innovating new ML algorithms and neural nets.
A GitHub profile link is highly valued. For the right fit into the Couture.ai family, compensation is no bar.
Couture.ai is building a patent-pending AI platform targeted at vertical-specific solutions. The platform is already licensed by Reliance Jio and a few European retailers, to power real-time experiences for their combined >200 million end users. For this role, a credible display of innovation in past projects is a must. We are looking for hands-on leaders in data engineering with 5-11 years of research/large-scale production implementation experience and:
- Proven expertise in Spark, Kafka, and the Hadoop ecosystem.
- Rock-solid algorithmic capabilities.
- Production deployments for massively large-scale systems, real-time personalization, big data analytics, and semantic search.
- Expertise in containerization (Docker, Kubernetes) and cloud infrastructure, preferably OpenStack.
- Experience with Spark ML, TensorFlow (& TF Serving), MXNet, Scala, Python, NoSQL DBs, Kubernetes, and Elasticsearch/Solr in production.
A tier-1 college (BE from IITs, BITS-Pilani, IIITs, top NITs, DTU, NSIT, or MS from Stanford, UC, MIT, CMU, UW–Madison, ETH, or other top global schools) or an exceptionally bright work history is a must. Let us know if this interests you and you would like to explore the profile further.
Couture.ai is building a patent-pending AI platform targeted at vertical-specific solutions. The platform is already licensed by Reliance Jio and a few European retailers, to power real-time experiences for their combined >200 million end users. For this role, a credible display of innovation in past projects (or academia) is a must. We are looking for a candidate who lives and talks data & algorithms, loves to play with big data engineering, and is hands-on with Apache Spark, Kafka, RDBMS/NoSQL DBs, big data analytics, and handling Unix and production servers. A tier-1 college (BE from IITs, BITS-Pilani, top NITs, IIITs, or MS from Stanford, Berkeley, CMU, UW–Madison) or an exceptionally bright work history is a must. Let us know if this interests you and you would like to explore the profile further.
Roles and Responsibilities
● Collaborating as part of a cross-functional Agile team to create and enhance software that enables state-of-the-art, next-generation Big Data & Fast Data applications.
● Developing and deploying distributed-computing Big Data applications using open source frameworks like Apache Spark, Beam, NiFi, Storm, and Kafka.
● Utilizing programming languages like Java and Python, and open source RDBMS and NoSQL databases.
● Utilizing Hadoop modules such as YARN & MapReduce, and related Apache projects such as Hive and HBase.
● Leveraging DevOps techniques and practices like Continuous Integration, Continuous Deployment, Test Automation, Build Automation, and Test-Driven Development to enable the rapid delivery of working code, utilizing tools like Jenkins, Maven, Git, and Docker.
● Performing unit tests and conducting reviews with other team members to make sure your code is rigorously designed, elegantly coded, and effectively tuned for performance.
Critical Functional Skills
● Understanding of Big Data & Fast Data applications.
● Computing Big Data applications using open source frameworks like Apache Spark, Beam, NiFi, Storm, and Kafka.
● Use of Hadoop modules such as YARN & MapReduce, and related Apache projects such as Hive and HBase.
● Leveraging DevOps techniques and practices like Continuous Integration, Continuous Deployment, Test Automation, Build Automation, and Test-Driven Development.
Experience Required
● At least 6 years of professional work experience in data platforms such as HDP, Cloudera, or EMR.
● At least 4 years of experience in open source programming languages for large-scale data analysis.
● At least 4 years of Java development for modern data engineering.
● At least 2 years of experience working with stream-processing solutions on Kafka/Flink/Spark Streaming/key-value stores.
Minimum Qualifications Required: B.Tech./M.Tech. in Computer Science or a related technical discipline (or equivalent).
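The stream-processing experience asked for above (Kafka/Flink/Spark Streaming) centres on windowed aggregation over an event stream. As a rough, hypothetical sketch of that idea in plain Python (the events and window size are invented for illustration; a real stream processor adds partitioning, checkpointed state, and fault tolerance):

```python
from collections import defaultdict

# Toy tumbling-window aggregation: bucket (timestamp, event_type) pairs
# into fixed 5-second windows and count events per type in each window.
events = [(0, "click"), (2, "click"), (5, "buy"), (7, "click"), (11, "buy")]
WINDOW = 5  # window length in seconds (arbitrary for this sketch)

windows = defaultdict(lambda: defaultdict(int))
for ts, key in events:
    # Align each event to the start of its window, then count it.
    windows[ts // WINDOW * WINDOW][key] += 1

# windows now maps window-start -> per-event-type counts, e.g. the
# [0, 5) window holds two clicks.
```

The same tumbling-window grouping is what a Spark Structured Streaming `groupBy(window(...))` or a Flink window operator computes, just distributed and incremental.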
Roles and Responsibilities:
• Responsible for developing and maintaining applications with PySpark.
• Contribute to the overall design and architecture of the applications developed and deployed.
• Performance tuning with respect to executor sizing and other environment parameters, code optimization, partition tuning, etc.
• Interact with business users to understand requirements and troubleshoot issues.
• Implement projects based on functional specifications.
Must-have Skills:
• Good experience in PySpark, including DataFrame core functions and Spark SQL.
• Good experience in SQL DBs - able to write queries of fair complexity.
• Excellent experience in Big Data programming for data transformations and aggregations.
• Good at ETL architecture: business-rules processing and data extraction from a data lake into data streams for business consumption.
• Good customer communication.
• Good analytical skills.
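For a sense of the "fair complexity" SQL this role expects, here is a small sketch using Python's built-in sqlite3 module; the tables, columns, and data are invented for the example, and the same join-aggregate-filter shape carries over directly to Spark SQL:

```python
import sqlite3

# In-memory database with two hypothetical tables.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE customers (id INTEGER PRIMARY KEY, region TEXT);
CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, amount REAL);
INSERT INTO customers VALUES (1, 'APAC'), (2, 'EMEA'), (3, 'APAC');
INSERT INTO orders VALUES (1, 1, 100.0), (2, 1, 250.0), (3, 2, 80.0), (4, 3, 40.0);
""")

# Join, aggregate per region, filter on the aggregate, and order the result.
rows = cur.execute("""
    SELECT c.region, COUNT(o.id) AS n_orders, SUM(o.amount) AS revenue
    FROM customers c
    JOIN orders o ON o.customer_id = c.id
    GROUP BY c.region
    HAVING SUM(o.amount) > 50
    ORDER BY revenue DESC
""").fetchall()
```

`rows` comes back as (region, order count, revenue) tuples, highest revenue first.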
Role: Data Engineer
Company: PayU
Location: Bangalore/Mumbai
Experience: 2-5 yrs
About Company: PayU is the payments and fintech business of Prosus, a global consumer internet group and one of the largest technology investors in the world. Operating and investing globally in markets with long-term growth potential, Prosus builds leading consumer internet companies that empower people and enrich communities. The leading online payment service provider in 36 countries, PayU is dedicated to creating a fast, simple, and efficient payment process for merchants and buyers. Focused on empowering people through financial services and creating a world without financial borders where everyone can prosper, PayU is one of the biggest investors in the fintech space globally, with investments totalling $700 million to date. PayU also specializes in credit products and services for emerging markets across the globe. We are dedicated to removing risks for merchants, allowing consumers to use credit in ways that suit them, and enabling a greater number of global citizens to access credit services. Our local operations in Asia, Central and Eastern Europe, Latin America, the Middle East, Africa, and South East Asia enable us to combine the expertise of high-growth companies with our own unique local knowledge and technology to ensure that our customers have access to the best financial services. India is the biggest market for PayU globally, and the company has already invested $400 million in this region in the last 4 years. In its next phase of growth, PayU is developing a full regional fintech ecosystem providing multiple digital financial services in one integrated experience. We are going to do this through 3 mechanisms: build; co-build/partner; and select strategic investments. PayU supports over 350,000+ merchants and millions of consumers making payments online, with over 250 payment methods and 1,800+ payment specialists.
The markets in which PayU operates represent a potential consumer base of nearly 2.3 billion people and huge growth potential for merchants.
Job responsibilities:
- Design infrastructure for data, especially for (but not limited to) consumption in machine learning applications.
- Define the database architecture needed to combine and link data, and ensure integrity across different sources.
- Ensure the performance of data systems for machine learning, from customer-facing web and mobile applications using cutting-edge open source frameworks, to highly available RESTful services, to back-end Java-based systems.
- Work with large, fast, complex data sets to solve difficult, non-routine analysis problems, applying advanced data-handling techniques where needed.
- Build data pipelines, including implementing, testing, and maintaining infrastructural components of the data engineering stack.
- Work closely with Data Engineers, ML Engineers, and SREs to gather data engineering requirements and to prototype, develop, validate, and deploy data science and machine learning solutions.
Requirements to be successful in this role:
- Strong knowledge of and experience with Python, Pandas, data wrangling, ETL processes, statistics, data visualisation, data modelling, and Informatica.
- Strong experience with scalable compute solutions such as Kafka and Snowflake.
- Strong experience with workflow management libraries and tools such as Airflow, AWS Step Functions, etc.
- Strong experience with data engineering practices (i.e. data ingestion pipelines and ETL).
- A good understanding of machine learning methods, algorithms, pipelines, testing practices, and frameworks.
- (Preferred) MEng/MSc/PhD degree in computer science, engineering, mathematics, physics, or equivalent (preference: DS/AI).
- Experience designing and implementing tools that support sharing of data, code, and practices across organizations at scale.
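The ingestion-pipeline/ETL practice listed in the requirements reduces to an extract, transform, load sequence. Below is a deliberately toy stdlib-only sketch of that shape (the CSV payload, field names, and JSON-lines sink are all invented; in this stack the extract would come from Kafka, the orchestration from Airflow, and the load would target Snowflake):

```python
import csv
import io
import json

# Hypothetical raw feed: one record has a missing amount and must be dropped.
RAW = """user_id,amount,currency
u1,100,INR
u2,,INR
u3,250,INR
"""

def extract(raw: str) -> list:
    """Extract: parse the raw CSV payload into row dicts."""
    return list(csv.DictReader(io.StringIO(raw)))

def transform(rows: list) -> list:
    """Transform: drop rows with missing amounts and cast types."""
    return [
        {"user_id": r["user_id"], "amount": float(r["amount"])}
        for r in rows
        if r["amount"]
    ]

def load(rows: list) -> str:
    """Load: serialise clean rows to the sink format (JSON lines here)."""
    return "\n".join(json.dumps(r) for r in rows)

result = load(transform(extract(RAW)))
```

Keeping each stage a pure function, as above, is what makes pipelines easy to unit-test before they are wired into an orchestrator.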
Hi!! Hope you are having a great day today!! We are on the lookout for core programmers who specialise in Spark, with programming experience in Scala/Python/Java. This is for a Technical Lead role, and we expect a candidate who is really strong with Spark, is a very good programmer, and has cloud experience in AWS/Azure. If this matches your profile, do apply. I will reach out to you as soon as possible. Stay safe!!
- Be an integral part of large-scale client business development and delivery engagements.
- Develop the software and systems needed for end-to-end execution on large projects.
- Work across all phases of the SDLC, and use software engineering principles to build scaled solutions.
- Build the knowledge base required to deliver increasingly complex technology projects.
Requirements:
- Object-oriented languages (e.g. Python, PySpark, Java, C#, C++) and frameworks (e.g. J2EE or .NET).
- Database programming using any flavour of SQL.
- Expertise in relational and dimensional modelling, including big data technologies.
- Exposure across the whole SDLC process, including testing and deployment.
- Expertise in Microsoft Azure is mandatory, including components like Azure Data Factory, Azure Data Lake Storage, Azure SQL, Azure Databricks, HDInsight, ML Service, etc.
- Good knowledge of Python and Spark is required.
- A good understanding of how to enable analytics using cloud technology and MLOps.
- Experience with Azure infrastructure and Azure DevOps will be a strong plus.
Why are we building Urban Company? Organized service commerce is a large yet young industry in India. While India is a very large market for home and local services (~USD 50 billion in retail spend) and is expected to double in the next 5 years, there is no billion-dollar company in this segment today. The industry is barely ~20 years old, with the sub-optimal market architecture typical of an unorganized market: a fragmented supply side operated by middlemen. As a result, experiences are broken for both customers and service professionals, each largely relying upon word of mouth to discover the other. The industry could easily be 1.5-2x larger than it is today if the frictions in the user's and professional's journeys were removed, and the experiences made more meaningful and joyful. The Urban Company team is young and passionate, and we see a massive disruption opportunity in this industry. By leveraging technology, and a set of simple yet powerful processes, we wish to build a platform that can organize the world of services and bring them to your fingertips. We believe there is immense value (akin to serendipity) in bringing together customers and professionals looking for each other. In the process, we hope to impact the lives of millions of service entrepreneurs, and transform service commerce the way Amazon transformed product commerce. Job Description: Urban Company has grown 3x YoY, and so has our tech stack. We have evolved a data-driven approach to solving for the product over the last few years. We deal with around 10TB in data analytics, at around 50Mn/day. We adopted platform thinking at a very early stage of UC: we started building central platform teams dedicated to solving core engineering problems around 2-3 years ago, and this has now evolved into a full-fledged vertical. Our platform vertical majorly includes Data Engineering, Service and Core Platform, Infrastructure, and Security.
We are currently looking for an Engineering Manager for the Data Engineering team: a person who loves solving for standardization, has strong platform thinking and opinions, and has solved for Data Engineering, Data Science, and analytics platforms.
Job Responsibilities:
- Building high-octane teams with strong opinions and strong platform thinking
- Working on complex design and architectural problems
- Solving funnel analytics and product insights, and building a highly scalable data platform
- Building a Data Science platform
- Building highly productive data-driven models that contribute to product success
- Visioning out the roadmap, and the thought process behind taking the current tech stack to the next level
- Building and maintaining the high NPS (70%) of platform products
- Being a strong decision-maker with hands-on experience
- Thinking about abstractions, systems, and services, and writing high-quality code
- Understanding the loopholes in current systems/architecture that can potentially break in the future, and pushing to solve them with other stakeholders
- Thinking through complex architecture to build robust platforms that serve all the categories and flows together, solving for scale, and working on internally built services to cater to our growing needs
Job Requirements:
- At least 1-2+ years of experience managing teams
- 5-8 years of industry experience solving complex problems from scratch, and graduate/post-graduate degrees from top-tier universities
- A thinker with strong opinions and the ability to turn those opinions into reality
- Prior experience creating complex systems
- Ability to build scalable, sustainable, reliable, and secure products based on past experience, and to lead teams and projects by themselves
- Ability to bring new practices, architectural choices, and new initiatives to the table to make the overall tech stack more robust
- History of, and familiarity with, server-side architecture based on APIs, databases, infrastructure, and systems
Ability to own the technical road map for systems/components.
What can you expect?
- A phenomenal work environment, with massive ownership and growth opportunities.
- A high-performance, high-velocity environment at the cutting edge of growth.
- Strong ownership expectations and the freedom to fail.
- Quick iterations and deployments – a fail-fast attitude.
- The opportunity to work on cutting-edge technologies.
- The massive, direct impact of the work you do on the lives of people.
Junior Data Scientist - Happymonk is at the forefront of digital reinvention, helping clients reimagine how they serve their connected customers and operate enterprises. We are looking for an experienced AI specialist to join the revolution, using deep learning, natural language processing (NLP), computer vision, chatbots, and robotics to help us improve various business outcomes and drive innovation. You will join a multidisciplinary team helping to shape our AI strategy and showcasing the potential for AI through early-stage solutions. This is an excellent opportunity to take advantage of emerging trends and technologies to make a real-world difference. As you mine, interpret, and clean our data, we will rely on you to ask questions, connect the dots, and uncover opportunities that lie hidden within, all with the ultimate goal of realizing the data's full potential. You will join a team of data specialists, but will "slice and dice" data using your own methods, creating new visions for the future. A data scientist knows how to extract meaning from and interpret data. This unique skill set requires the aid of statistical methods and machinery, but largely relies on analytical brainpower. Because raw data can rarely be utilized reliably, businesses in a variety of industries look to these technical experts to collect, clean, and validate their data. This meticulous process requires persistence and software engineering skills, expertise that's integral to understanding data bias and to debugging output from code. In simpler terms, data scientists find patterns and use that knowledge to build and improve. An artificial intelligence (AI) specialist applies their skills in engineering and computer science to create machines and software programs that can think for themselves. Most often, they use AI principles to address persistent business pain points, augment the capability of technical and human resources, and execute a change management/transformation process.
The key contribution of an AI specialist is using emerging technologies, such as machine learning (ML) and natural language processing (NLP), to solve business problems in new and creative ways that provide greater insight, accuracy, and consistency. This can be done in nearly any industry but is most often sought by government, health care, and higher-education institutions.
The objective of this role:
- Manage and direct research and development (R&D) processes to meet the needs of our AI strategy
- Understand company and client challenges and how integrating AI capabilities can help create solutions
- Lead cross-functional teams to identify and prioritize key areas of the partner's business where AI solutions can drive significant business benefit
- Analyze and explain AI and machine learning (ML) solutions while setting and maintaining high ethical standards
- Collaborate with product design and engineering to develop an understanding of needs
- Research and devise innovative statistical models for data analysis
- Communicate findings to all stakeholders
- Enable smarter business processes, and implement analytics for meaningful insights
- Keep current with technology and industry developments
Daily and Monthly Responsibilities:
- Advise the C-suite and business leaders on a broad range of technology, strategy, and policy issues associated with AI
- Work on functional design, process design (including scenario design and flow mapping), prototyping, testing, training, and defining support procedures, working with an advanced engineering team and executive leadership
- Manage a team to conduct assessments of the AI and automation market and competitive landscape
- Serve as a liaison between stakeholders and project teams, delivering feedback and enabling them to make necessary changes to product performance or presentation
- Work as the lead data strategist, identifying and integrating new datasets that can be leveraged through our product capabilities, and work closely with the engineering team to strategize and execute the development of data products
- Execute analytical experiments methodically to help solve various problems and make a true impact across various domains and industries
- Identify relevant data sources and sets to mine for client business needs, and collect large structured and unstructured datasets and variables
- Devise and utilize algorithms and models to mine big data stores, perform data and error analysis to improve models, and clean and validate data for uniformity and accuracy
- Analyze data for trends and patterns, and interpret data with a clear objective in mind
- Implement analytical models into production by collaborating with software developers and machine learning engineers
- Communicate analytic solutions to stakeholders and implement improvements as needed to operational systems
- Bachelor's degree in computer science
Special Skills Required:
- 1+ years of experience applying AI to practical and comprehensive technology solutions
- 2+ years of experience in data science
- Proven experience with ML, deep learning, Python, NLP, Kafka, and Spark
- Experience with program leadership, governance, and change enablement
- Knowledge of basic algorithms, object-oriented and functional design principles, and best-practice patterns
- Experience with REST API development, SQL design, and RDBMS design and optimization
- Experience with innovation accelerators
- Experience with cloud environments
- Bachelor's degree in statistics, applied mathematics, or a related discipline preferred
- Proficiency in data mining, mathematics, and statistical analysis
- Advanced pattern recognition and predictive modeling experience
- Experience with Excel, PowerPoint, Tableau, SQL, and programming languages (i.e., Java/Python, SAS)
- Comfort working in a dynamic, research-oriented group with several ongoing concurrent projects
- Professional certification
Location: Bengaluru, Karnataka, India - 560072
Compensation
Key perks:
- Working with the best in the industry
- Family Insurance Program
- Wellness Benefits
- Work from home
Industries: Internet Technology, Research and Development, AI/ML, Architecture, Security & Surveillance, Wellness, Construction & Allied Industries
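The NLP work described above usually starts with a basic text-mining pass, such as tokenising a corpus and counting term frequencies, before any modelling. A toy stdlib-only sketch of that first step (the "support tickets" and their contents are invented for illustration):

```python
import re
from collections import Counter

# Hypothetical raw text corpus: short support tickets.
tickets = [
    "payment failed at checkout",
    "checkout page froze",
    "refund for failed payment",
]

# Tokenise: lowercase and pull out alphabetic word runs.
tokens = [t for text in tickets for t in re.findall(r"[a-z]+", text.lower())]

# Count term frequencies and keep the three most common terms.
top = Counter(tokens).most_common(3)
```

In practice this simple frequency view already surfaces recurring themes ("payment", "checkout", "failed") that guide which heavier NLP models are worth building.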
Skills Requirements:
- Knowledge of Hadoop ecosystem installation, initial configuration, and performance tuning.
- Expert with Apache Ambari, Spark, Unix shell scripting, Kubernetes, and Docker.
- Knowledge of Python is desirable.
- Experience with HDP Manager/clients and various dashboards.
- Understanding of Hadoop security (Kerberos, Ranger, and Knox), encryption, and data masking.
- Experience with automation/configuration management using Chef, Ansible, or an equivalent.
- Strong experience with any Linux distribution.
- Basic understanding of network technologies, CPU, memory, and storage.
- Database administration is a plus.
Qualifications and Education Requirements:
- 2 to 4 years of experience with, and detailed knowledge of, core Hadoop components, solutions, and dashboards running on Big Data technologies such as Hadoop/Spark.
- Bachelor's degree or equivalent in Computer Science, Information Technology, or related fields.
Must-have Skills:
- Extract and present valuable information from data
- Understand business requirements and generate insights
- Build mathematical models, validate them, and work with them
- Explain complex topics in a way tailored to the audience
- Validate and follow up on results
- Work with large and complex data sets
- Establish priorities with clear goals and responsibilities to achieve a high level of performance
- Work in an agile and iterative manner on solving problems
- Proactively evaluate different options, with the ability to solve problems in innovative ways
- Develop new solutions or combine existing methods to create new approaches
- Good understanding of digital & analytics
- Strong communication skills, orally and in writing
Job Overview: As a Data Scientist you will work in collaboration with our business and engineering people on creating value from data. Often the work requires solving complex problems by turning vast amounts of data into business insights through advanced analytics, modeling, and machine learning. You have a strong foundation in analytics, mathematical modeling, computer science, and math, coupled with a strong business sense. You proactively fetch information from various sources and analyze it to better understand how the business performs. Furthermore, you model and build AI tools that automate certain processes within the company. The solutions produced will be implemented to impact business results. The Data Scientist believes in a non-hierarchical culture of collaboration, transparency, safety, and trust, working with a focus on value creation, growth, and serving customers with full ownership and accountability.
Delivering exceptional customer and business results.
Industry: Any (preferably Manufacturing or Logistics); willingness to learn manufacturing systems (OT systems and data stores).
Primary Responsibilities:
- Develop an understanding of business obstacles, create solutions based on advanced analytics, and draw implications for model development
- Combine, explore, and draw insights from data - often large and complex data assets from different parts of the business
- Design and build explorative, predictive, or prescriptive models, utilizing optimization, simulation, and machine learning techniques
- Prototype and pilot new solutions, and be part of the aim of 'productifying' those valuable solutions that can have an impact at a global scale
- Guide and coach other chapter colleagues to help solve data/technical problems at an operational level, and in methodologies to help improve development processes
- Identify and interpret trends and patterns in complex data sets to enable the business to make data-driven decisions
Regards, Sudarshini
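The predictive-modelling responsibility above, at its simplest, is fitting a model to historical data and forecasting forward. A toy stdlib-only sketch using ordinary least squares (the monthly output figures are invented; real work would use proper features, validation, and an ML library):

```python
from statistics import mean

# Hypothetical historical data: production output per month.
months = [1, 2, 3, 4, 5, 6]
output = [10.0, 12.1, 13.9, 16.2, 18.0, 19.8]

# Ordinary least squares: slope = S_xy / S_xx, intercept from the means.
mx, my = mean(months), mean(output)
slope = sum((x - mx) * (y - my) for x, y in zip(months, output)) / sum(
    (x - mx) ** 2 for x in months
)
intercept = my - slope * mx

# One-step-ahead forecast for month 7.
forecast = slope * 7 + intercept
```

Even this two-parameter model illustrates the loop the role describes: fit on history, sanity-check the fit, then use it to support a forward-looking business decision.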