Couture.ai is building a patent-pending AI platform targeted at vertical-specific solutions. The platform is already licensed by Reliance Jio and a few European retailers to power real-time experiences for their combined 200+ million end users. The founding team consists of BITS Pilani alumni with experience creating global startup success stories. The core team we are building consists of some of the best minds in India in artificial intelligence research and data engineering.

We are looking to fill multiple roles requiring 2-7 years of research or large-scale production implementation experience, with:
- Rock-solid algorithmic capabilities.
- Production deployments for massively large-scale systems, real-time personalization, big data analytics, and semantic search.
- Or credible research experience in innovating new ML algorithms and neural nets.

A GitHub profile link is highly valued. For the right fit into the Couture.ai family, compensation is no bar.
Couture.ai is building a patent-pending AI platform targeted at vertical-specific solutions. The platform is already licensed by Reliance Jio and a few European retailers to power real-time experiences for their combined 200+ million end users. For this role, a credible display of innovation in past projects is a must.

We are looking for hands-on leaders in data engineering with 5-11 years of research or large-scale production implementation experience, with:
- Proven expertise in Spark, Kafka, and the Hadoop ecosystem.
- Rock-solid algorithmic capabilities.
- Production deployments for massively large-scale systems, real-time personalization, big data analytics, and semantic search.
- Expertise in containerization (Docker, Kubernetes) and cloud infrastructure, preferably OpenStack.
- Experience with Spark ML, TensorFlow (and TF Serving), MXNet, Scala, Python, NoSQL DBs, Kubernetes, and ElasticSearch/Solr in production.

A Tier-1 college (BE from IITs, BITS Pilani, IIITs, top NITs, DTU, NSIT, or an MS from Stanford, UC, MIT, CMU, UW–Madison, ETH, or other top global schools) or an exceptionally bright work history is a must. Let us know if this interests you and you would like to explore the profile further.
Couture.ai is building a patent-pending AI platform targeted at vertical-specific solutions. The platform is already licensed by Reliance Jio and a few European retailers to power real-time experiences for their combined 200+ million end users. For this role, a credible display of innovation in past projects is a must.

We are looking to fill multiple roles requiring 2-8 years of research or large-scale production implementation experience, with:
- Proven expertise in Spark, Kafka, and the Hadoop ecosystem.
- Rock-solid algorithmic capabilities.
- Production deployments for massively large-scale systems, real-time personalization, big data analytics, and semantic search.
- Or experience with Spark ML, TensorFlow (and TF Serving), MXNet, Scala, Python, NoSQL DBs, Kubernetes, or ElasticSearch/Solr in production.

A Tier-1 college (BE from IITs, BITS Pilani, IIITs, top NITs, or an MS from Stanford, Berkeley, CMU, or UW–Madison) or an exceptionally bright work history is a must. Let us know if this interests you and you would like to explore the profile further.
Requirements:
- Strong background in Linux fundamentals and system administration.
- Good command of scripting languages such as Python and shell scripting.
- Experience with Docker and at least one container management system such as Kubernetes, Docker Swarm, or Apache Mesos.
- Ability to use a wide variety of open-source technologies and cloud services such as AWS, GCP, or Azure.
- Experience with automation/configuration management using Puppet, Chef, or an equivalent.
- Experience managing in-memory databases such as Redis and Memcached.
- Experience with messaging systems such as Kafka, RabbitMQ, or ActiveMQ.
- Knowledge of best practices and IT operations in an always-up, always-available service.
- Good experience with monitoring and alerting systems such as Nagios and Zabbix.
- Experience with CI and CD tools.

Job Description:
- Manage the cloud infrastructure.
- Explore new and more efficient open-source modules and bring them into the bot detection infrastructure.
- Explore new Kubernetes features and implement them to build scalable infrastructure.
- Implement effective monitoring tools and proactively automate day-to-day activities.
- Linux administration.
- Manage the office network and hardware.
- Implement policies dictated by the compliance team to secure the ShieldSquare backend.
- Follow the tracker and resolve defects.
- Understand the product and manage the staging/production setup.
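As a hedged illustration of the kind of day-to-day automation this role describes (not ShieldSquare's actual tooling), here is a minimal stdlib-only Python sketch of a service health check whose summary could feed an alerting system such as Nagios or Zabbix; the service names used are made up:

```python
# Illustrative sketch only: a minimal health check + alert summary of the
# sort a DevOps engineer might script in Python. Hosts/ports are examples.
import socket


def check_tcp_service(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


def summarize(checks: dict) -> str:
    """Render a one-line, Nagios-style status summary from named check results."""
    down = sorted(name for name, ok in checks.items() if not ok)
    return "OK" if not down else "CRITICAL: " + ", ".join(down)
```

In practice a script like this would run on a schedule (cron or a Kubernetes CronJob) and push the summary into the alerting pipeline.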
We are looking for a full-time senior resource to lead the Python-driven dev team. The candidate will be responsible for designing, developing, and taking to production highly scalable applications. This is an opportunity to work on leading ML frameworks as well as custom-built frameworks for enhanced financial analytics.

Crediwatch is an automated and intelligent data curation platform that helps businesses make faster and smarter decisions. Crediwatch aids sophisticated credit and other risk assessment models by providing data intelligence, predictive analysis, and decision-enabling technologies that maximise customer profitability and performance. Crediwatch has received accolades in the Citibank Tech4Integrity challenge (worldwide), the Barclays Rise accelerator, and Tech30 by YourStory, to name a few. We are based in the heart of Bangalore and are growing fast.
Experience: 7+ years

Job Responsibilities:
- Manage the cloud deployment, with thousands of VMs and containers at 100% uptime.
- Budget the infra costs and plan for continued cost optimisation.
- Manage and motivate the team members.
- Design the architecture to scale the back end to meet the business requirements.

Requirements:
- Strong background in Linux fundamentals and system administration.
- Good command of scripting languages such as Python and shell scripting.
- Experience with Docker and at least one container management system such as Kubernetes, Docker Swarm, or Apache Mesos.
- Ability to use a wide variety of open-source technologies and cloud services such as AWS, GCP, or Azure.
- Experience with automation/configuration management using Puppet, Chef, or an equivalent.
- Experience managing in-memory databases such as Redis and Memcached.
- Experience with messaging systems such as Kafka, RabbitMQ, or ActiveMQ.
- Knowledge of best practices and IT operations in an always-up, always-available service.
- Good team management and communication skills.
- Good experience with monitoring and alerting systems such as Nagios and Zabbix.
- Experience with CI and CD tools.
RESPONSIBILITIES:
1. Full ownership of tech, from driving product decisions to architecture to deployment.
2. Develop a cutting-edge user experience and build cutting-edge technology solutions such as instant messaging on poor networks, live discussions, live videos, and optimal matching.
3. Use billions of data points to build a user personalisation engine.
4. Build a data network effects engine to increase engagement and virality.
5. Scale the systems to billions of daily hits.
6. Deep-dive into performance, power management, memory optimisation, and network connectivity optimisation for the next billion Indians.
7. Orchestrate complicated workflows, asynchronous actions, and higher-order components.
8. Work directly with the Product and Design teams.

REQUIREMENTS:
1. Should have hacked some (computer or non-computer) system to your advantage.
2. Built and managed systems with a scale of 10Mn+ daily hits.
3. Strong architectural experience.
4. Strong experience in memory management, performance tuning, and resource optimisation.
5. PREFERENCE: if you are a woman, an ex-entrepreneur, or have a CS bachelor's degree from IIT/BITS/NIT.

P.S. If you don't fulfil one of the requirements, you need to be exceptional in the others to be considered.
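To make "orchestrating asynchronous actions" concrete, here is a minimal, assumed sketch (not this company's code) of fanning out independent I/O-bound actions concurrently with Python's asyncio; the action names and delays are illustrative:

```python
# Hedged sketch: orchestrating independent asynchronous actions so total
# latency approaches the slowest action rather than the sum of all of them.
import asyncio


async def fetch(name: str, delay: float) -> str:
    # Stand-in for an I/O-bound action (network call, DB read, push send).
    await asyncio.sleep(delay)
    return name


async def orchestrate():
    # Fan out the actions concurrently and gather their results in order.
    return await asyncio.gather(fetch("profile", 0.01), fetch("feed", 0.02))


results = asyncio.run(orchestrate())
```

The same pattern extends to error handling (`return_exceptions=True`) and timeouts (`asyncio.wait_for`) when some actions run over poor networks.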
Technical Lead – Big Data Analytics

We are looking for a senior engineer to work on our next-generation marketing analytics platform. The engineer should have working experience in handling big sets of raw data and transforming them into meaningful insights using any of these tools: Hive/Presto/Spark, Redshift, Kafka/Kinesis, etc.

LeadSquared is a leading customer acquisition SaaS platform used by over 15,000 users across 25 countries to run their sales and marketing processes. Our goal is to have a million+ users on our platform in the next 5 years, which is an extraordinary and exciting challenge for the Engineering team to work on.

The Role
LeadSquared is looking for a senior engineer to be part of the Marketing Analytics platform, where we are building a system to gather multi-channel customer behavior data and generate meaningful insights and actions to eventually accelerate revenues. The individual will work in a small team to build the system to ingest large volumes of data and set up ways to transform the data to generate insights as well as real-time interactive analytics.

Requirements
• Passion for building and delivering great software.
• Ability to work in a small team and take full ownership and responsibility for critical projects.
• 5+ years of experience in a data-driven environment designing and building business applications.
• Strong software development skills in one or more programming languages (Python, Java, or C#).
• At least 1 year of experience in distributed analytic processing technologies such as Hadoop, Hive, Pig, Presto, MapReduce, Kafka, or Spark.
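The core transformation the role describes, turning raw multi-channel behavior events into an aggregate insight, can be sketched in plain Python; this is an assumption-laden toy (the field names "channel" and "revenue" are made up), and in production the same aggregation would run on Spark, Hive, or Presto over far larger data:

```python
# Illustrative sketch: aggregate raw per-event records into a per-channel
# summary, the kind of insight a marketing analytics pipeline produces.
from collections import defaultdict


def channel_summary(events):
    """Roll up event count and revenue per channel from raw event dicts."""
    totals = defaultdict(lambda: {"events": 0, "revenue": 0.0})
    for e in events:
        t = totals[e["channel"]]
        t["events"] += 1
        t["revenue"] += e.get("revenue", 0.0)
    return dict(totals)


# Toy input standing in for ingested multi-channel behavior data.
summary = channel_summary([
    {"channel": "email", "revenue": 5.0},
    {"channel": "email"},
    {"channel": "web", "revenue": 2.5},
])
```

Conceptually this is the reduce side of a MapReduce/Spark `groupBy` + aggregation over the ingested event stream.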
Basic Qualifications
• Strong understanding of distributed computing principles.
• Proficiency with distributed file/object storage systems such as HDFS.
• Hands-on experience with computation frameworks such as Spark Streaming and MapReduce v2.
• Effectively implemented big data ingestion and transformation pipelines, e.g. Kafka, Kinesis, Fluentd, LogStash, or the ELK stack.
• Database proficiency and strong experience with at least one NoSQL data store, e.g. MongoDB, HBase, or Cassandra.
• Hands-on working knowledge of data warehouse systems, e.g. Hive or AWS Redshift.
• Participated in scaling and processing of large sets of data (on the order of petabytes).

Preferred Qualifications
• Expert-level proficiency in SQL; ability to perform complex data analysis on large volumes of data.
• Understanding of ad-hoc interactive query engines such as Apache Drill, Presto, Google BigQuery, or AWS Athena.
• Exposure to one or more search stores such as Solr or ElasticSearch is a plus.
• Experience working with distributed messaging systems such as RabbitMQ.
• Exposure to infrastructure automation tools such as Chef.
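For the SQL-analysis qualification above, here is a small hedged example of the kind of ad-hoc aggregation query meant, run on SQLite purely so it is self-contained (the role targets engines like Presto, Redshift, or Athena); the table and column names are invented for illustration:

```python
# Illustrative only: an ad-hoc SQL aggregation of the kind an interactive
# query engine would run over warehouse data. Schema/data are made up.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE pageviews (user_id TEXT, country TEXT, views INT)")
conn.executemany(
    "INSERT INTO pageviews VALUES (?, ?, ?)",
    [("u1", "IN", 10), ("u2", "IN", 5), ("u3", "US", 7)],
)

# Total views per country, highest first -- a simple stand-in for the
# "complex data analysis with large volumes of data" the posting names.
rows = conn.execute(
    "SELECT country, SUM(views) AS total FROM pageviews "
    "GROUP BY country ORDER BY total DESC"
).fetchall()
```

On Presto or Athena the same statement would run unchanged against tables of petabyte scale; only the connection layer differs.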
www.aaknet.co.in/careers/careers-at-aaknet.html

Are you extraordinary, a rock star who has hardly found a place to leverage or challenge your potential, and hasn't spotted a sky-rocketing opportunity yet? Come play with us – face the challenges we can throw at you; chances are you might be humiliated (positively!), but do not take it too seriously! Please be informed that we rate CHARACTER and attitude as highly as, if not higher than, your great skills, experience, and sharpness. :)

Best wishes & regards,
Team Aak!
Brief About the Company
EdGE Networks Pvt. Ltd. is an innovative HR technology solutions provider focused on helping organizations meet their talent-related challenges. With our expertise in artificial intelligence, semantic analysis, data science, machine learning, and predictive modelling, we enable HR organizations to lead with data and intelligence. Our solutions significantly improve workforce availability, billing, and allocation, and drive straight bottom-line impacts. For more details, please log on to www.edgenetworks.in and www.hirealchemy.com.

Do apply if you meet most of the following requirements:
- Very strong Python, Java, or Scala experience, especially in open-source, data-intensive, distributed environments.
- Work experience with libraries such as scikit-learn, NumPy, SciPy, and Cython.
- Expert in Spark, MapReduce, Pig, Hive, Kafka, Storm, etc., including performance tuning.
- Implemented complex projects dealing with considerable data size and high complexity.
- Good understanding of algorithms, data structures, and performance optimization techniques.
- Excellent problem solver, analytical thinker, and quick learner.
- Search capabilities such as ElasticSearch, with experience in MongoDB.
- Excellent written and verbal communication skills.

Nice to have:
- Experience writing Spark and/or MapReduce v2 jobs.
- Ability to translate requirements and/or specifications into code that is relatively bug-free.
- Writing unit and integration tests.
- Knowledge of C++.
- Knowledge of Theano, TensorFlow, Caffe, Torch, etc.
Develop analytic tools, working on big data and distributed systems.
- Provide technical leadership on developing our core analytic platform.
- Lead development efforts on product features using Scala/Java.
- Demonstrable excellence in innovation, problem solving, analytical skills, data structures, and design patterns.
- Expert in building applications using Spark and Spark Streaming.
- Exposure to NoSQL (HBase/Cassandra), Hive and Pig Latin, and Mahout.
- Extensive experience with Hadoop and machine learning algorithms.