Job Description: We are looking for someone who can work with the platform or analytics vertical to extend and scale our product line. Every product line has dependencies on other products within the LimeTray ecosystem, and the SSE-2 is expected to collaborate with different internal teams to own stable and scalable releases. Every product line has its own tech stack, and the candidate is expected to be comfortable working across all of them as and when needed. Some of the technologies/frameworks that we work on: Microservices, Java, Node, MySQL, MongoDB, Angular, React, Kubernetes, AWS, Python.
Requirements:
- Minimum 3 years' work experience building, managing and maintaining Python-based backend applications
- B.Tech/BE in CS from Tier 1/2 institutes
- Strong fundamentals of data structures and algorithms
- Experience in Python and design patterns
- Expert in Git, unit tests, technical documentation and other development best practices
- Worked with SQL and NoSQL databases (Cassandra, MySQL)
- Understanding of async programming; knowledge of messaging services such as pub/sub or streaming (e.g. Kafka, ActiveMQ, RabbitMQ)
- Understanding of algorithms, data structures and server management
- Understanding of microservice or distributed architecture
- Delivered high-quality work with significant contributions
- Experience handling small teams
- Good debugging skills
- Good analytical and problem-solving skills
What we are looking for:
- Ownership driven: owns end-to-end development
- Team player: works well in a team; collaborates within and outside the team
- Communication: speaks and writes clearly and articulately; maintains this standard in all forms of written communication, including email
- Proactive and persistent: acts without being told to and demonstrates a willingness to go the distance to get something done
- Develops an emotional bond with the product and does what is good for the product
- Customer-first mentality: understands customers' pain and works towards solutions
- Honest and always keeps high standards, and expects the same from the team
- Strict on quality and stability of the product
Responsibilities: You will interact directly with colleagues across all responsibility areas and with the Director of Engineering. The successful candidate for this position:
- Designs and implements well-architected and scalable solutions
- Collaborates with various teams in releasing high-quality software
- Performs code reviews and contributes to healthy coding conventions
- Assists in integration with customer systems
- Provides timely responses to internal technical questions
- Demonstrates leadership in navigating tense periods and keeping calm
Our Culture:
- Integrity and motivation matter more than skill and experience
- Cross-company team building and collaboration
- A diverse, highly talented and passionate group of individuals
Ideal Candidate: The ideal candidate is a senior engineer with substantial development experience and high standards for code quality and maintainability.
Basic Qualifications:
- 4-year degree in Computer Science or Computer Engineering
Preferred Qualifications:
- 5+ years of development experience
- Experience in Java or Scala
- Experience with all parts of the SDLC, including CI/CD and testing methodologies
- Experience working with NoSQL technologies and message queue management
- Self-motivated and able to work with minimum guidance
- Experience in a startup or rapid-growth product or project
- Comfortable with modern version control and agile development
Bonus Points:
- Experience working with microservices, containers or big data technologies
- Working knowledge of cloud technologies like GCE and AWS
- Writes blog posts and has a strong record on Stack Overflow and similar sites
This role will be responsible for developing and deploying a game-changing and highly disruptive advertising technology platform. This person would also take on the following responsibilities:
- Gather and process raw data at scale (including writing scripts, web scraping, calling APIs, writing SQL queries, etc.)
- Work closely with our engineering team to integrate your innovations and algorithms into our production systems
- Support business decisions with ad hoc analysis as needed
- Propose and investigate new techniques
- Troubleshoot production issues and identify practical solutions
- Perform routine check-ups, backups and monitoring of the entire MySQL and Hadoop ecosystem
- Take end-to-end responsibility for the traditional databases (MySQL) and the big data ETL, analysis and processing life cycle in the organization
- Build, deploy and maintain real-time streaming pipelines and real-time analytics
- Manage deployments of big data clusters across private and public cloud platforms
Job Description: We are looking for someone who can work in the Analytics vertical to extend and scale our product line. Every product line depends on the Analytics vertical within the LimeTray ecosystem, and the SSE-2 is expected to collaborate with internal teams to own stable and scalable releases. Every product line has its own tech stack, and the candidate is expected to be comfortable working across all of them as and when needed. Some of the technologies/frameworks that we work on: Microservices, Java, Node, Python, MySQL, MongoDB, Cassandra, Angular, React, Kubernetes, AWS.
Requirements:
- Minimum 4 years' work experience building, managing and maintaining analytics applications
- B.Tech/BE in CS/IT from Tier 1/2 institutes
- Experience handling small teams
- Strong fundamentals of data structures and algorithms
- Good analytical and problem-solving skills
- Strong hands-on experience in Python
- In-depth knowledge of queueing systems (Kafka/ActiveMQ/RabbitMQ)
- Experience building data pipelines and real-time analytics systems
- Experience in SQL (MySQL) and NoSQL (Mongo/Cassandra) databases is a plus
- Understanding of service-oriented architecture
- Delivered high-quality work with significant contributions
- Expert in Git, unit tests, technical documentation and other development best practices
What we are looking for:
- Customer-first mentality: develops an emotional bond with the product and prioritizes accordingly
- Ownership driven: owns end-to-end development
- Proactive and persistent: acts without being told to and demonstrates a willingness to go the distance to get something done
- Strict on quality and stability of the product
- Communication: speaks and writes clearly and articulately; maintains this standard in all forms of written communication
- Team player
<h3>Job Profile</h3>
Hands-on experience working with Elasticsearch 5.x or 2.x
Hands-on experience programming in Python or Node.js
Understand product requirements and map them to relevant Elasticsearch features
In-depth understanding of analyzers, mappers, nested queries, aggregations, synonyms, significant terms, etc.
In-depth understanding of scoring (plus function score/custom scripting)
Experience handling large indexes, sharding and maintaining production-level clusters
Experience with add-on tools like Kibana, Logstash, Graph and Machine Learning will be an added advantage
<h3>Required experience</h3> 2 to 7 years
<h3>Required qualification</h3> A strong foundation in computer science, with strong competencies in data structures, algorithms and software design. Bachelor's or Master's degree in Computer Science or Engineering.
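To make the scoring requirement above concrete: a `function_score` query combines full-text relevance with a custom boost, which is the kind of feature mapping this role involves. The sketch below builds such a query body as a plain Python dict so its structure can be inspected without a running cluster; the index fields (`name`, `review_count`) are hypothetical examples, not from the posting.

```python
# Hedged sketch of an Elasticsearch function_score query body.
# Field names are invented for illustration; built as a plain dict
# so the structure can be checked without a live cluster.

def build_function_score_query(text: str, popularity_field: str) -> dict:
    """Combine full-text relevance on 'name' with a popularity boost."""
    return {
        "query": {
            "function_score": {
                # Base relevance: standard full-text match.
                "query": {"match": {"name": text}},
                # Multiply the text score by a dampened popularity signal.
                "functions": [
                    {
                        "field_value_factor": {
                            "field": popularity_field,
                            "modifier": "log1p",  # dampen large counts
                            "missing": 0,         # docs without the field
                        }
                    }
                ],
                "boost_mode": "multiply",
            }
        }
    }

query = build_function_score_query("wireless headphones", "review_count")
```

In practice this dict would be passed as the body of a search request; `log1p` keeps heavily reviewed documents from completely dominating textual relevance.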
It is my pleasure to introduce you to IDEAS2IT Technologies, Chennai. If you are looking for a challenging position as a Big Data engineer, solving complex business problems by applying the latest in data science, machine learning and AI, read on. Ideas2IT is a high-end product engineering firm that rolls out its own products and also helps Silicon Valley firms with their product engineering. We are looking for above-average programmers to be part of our Data Science Lab. You will be working on projects like:
- An AI platform built using Google TensorFlow for a predictive hiring product
- A betting-odds platform that matches offered odds to leverage spreads
- A PPO platform for predictive pricing and promotions for enterprise eCommerce
Part of your tool set will be Google TensorFlow, Python ML frameworks, Apache Spark, R, Google BigQuery, Scala/Octave, Kafka and so on. If you have relevant experience, great! If not, it doesn't matter. We believe in hiring people with high IQ and the right attitude over ready-made skills. As long as you are passionate about building world-class enterprise products and understand whatever technology you are working on in depth, we will bring you up to speed on all the technologies we use. Oh, and did we mention that you need to be super smart? Sounds interesting? Ideas2IT is a high-end product firm. Started by an ex-Googler, Murali Vivekanandan, we count Siemens, Motorola, eBay, Microsoft and Zynga among our clients. We solve some very interesting problems in the US startup ecosystem and have created great products in the process. When we build, we build great! We actively contribute to open source projects. We've built our own frameworks. We're betting the house on big data, and with a Stanford grad leading the team, we're sure to win. Last year we rolled out two of our products as separate companies and raised institutional funds: Pipecandy and Idearx.
Dear Candidate, please find the details below:
Ruby on Rails Developer
Years of experience: 3 to 6 years
Required skills: Ruby, Ruby on Rails; experience developing web applications using Ruby/RoR
Databases: PostgreSQL
Knowledge of REST is an added advantage
OS: Linux
Please share your details with firstname.lastname@example.org, including:
Total Exp:
Rel Exp:
Current CTC:
Expected CTC:
Notice Period:
Niyuj is a product engineering company that engages with the customer at different levels in the product development lifecycle in order to build quality products, on budget and on time. Founded in 2007 by a passionate technology leader, we have stable and seasoned leadership with hands-on experience working with or consulting for companies from bootstrapped start-ups to large multinationals. We have global experience in the US, Australia and India, and have worked with everyone from Fortune 500 companies to prominent startups; clients include Symantec, VMware, Carbonite and Edgewater Networks.
Domain areas we work in:
- CLOUD SERVICES: Enterprises are rushing to incorporate cloud computing, big data and mobile into their IT infrastructures.
- BIG-DATA ANALYTICS: Revolutionizing the way Fortune 1000 companies harness billions of data points and turn them into a competitive advantage.
- NETWORK AND SECURITY: Network- and security-related system-level work that meets customer demands and delivers real value.
Our prime customer, Carbonite, is America's #1 cloud backup and storage company, with over 1.5 million customers, headquartered in Boston, MA, with offices in 15 locations across the world.
Your potential for exponential growth: Your experience and expertise would be a great addition to our team. You will have the opportunity to work closely with industry leaders, literally sitting across the table and jointly building the future with noted gurus and industry veterans from prestigious institutions like the IITs and top US universities, with industry experience at Fortune 500 companies like EMC, Symantec and VERITAS.
As a DevOps Engineer, you will be responsible for managing and building upon the infrastructure that supports our data intelligence platform. You'll also be involved in building tools and establishing processes to empower developers to deploy and release their code seamlessly. Across teams, we will look to you to make key decisions for our infrastructure, networking and security. You will also own, scale, and maintain the compute and storage infrastructure for various product teams. The ideal DevOps Engineer possesses a solid understanding of system internals and distributed systems. They understand what it takes to work in a startup environment and have the zeal to establish a culture of infrastructure awareness and transparency across teams and products. They fail fast, learn faster, and execute almost instantly.
Technology Stack: Configuration management tools (Ansible/Chef/Puppet), cloud service providers (AWS/DigitalOcean); Docker+Kubernetes ecosystem experience is a plus.
WHY YOU?
* Because you love to take ownership of the infrastructure, allowing developers to deploy and manage microservices at scale.
* Because you love tinkering with and building upon new tools and technologies to make your work easier and streamlined with the industry's best practices.
* Because you have the ability to analyze and optimize performance in high-traffic internet applications.
* Because you take pride in building scalable and fault-tolerant infrastructural systems.
* Because you see explaining complex engineering concepts and design decisions to the less tech-savvy as an interesting challenge.
IN MONTH 1, YOU WILL...
* Learn about the products and internal tools that power our data intelligence platform.
* Understand the underlying infrastructure and play around with the tools used to manage it.
* Get familiar with the current architectural challenges arising from handling data and web traffic at scale.
IN MONTH 3, YOU WILL...
* Become an integral part of the architectural decisions taken across teams and products.
* Play a pivotal role in establishing a culture of infrastructure awareness and transparency across the company.
* Become the go-to person for engineers to get help solving issues with performance and scale.
IN MONTH 6 (AND BEYOND), YOU WILL...
* Hire a couple of engineers to strengthen the team and build systems to help manage high-volume and high-velocity data.
* Be involved, along with the DevOps team, in tasks ranging from tracking statistics and managing alerts to deploying new hosts and debugging intricate production issues.
About SocialCops: SocialCops is a data intelligence company that is empowering leaders in organizations globally, including the United Nations and Unilever. Our platform powers over 150 organizations across 28 countries. As a pioneering tech startup, SocialCops was recognized in the list of Technology Pioneers 2018 by the World Economic Forum and by the New York Times in its list of 30 global visionaries. We were also part of the Google Launchpad Accelerator 2018. AasaanJobs named SocialCops one of the best Indian startups to work for in 2018.
Read more about our work and case studies: https://socialcops.com/case-studies/
Watch our co-founder's TEDx talk on how big data can influence decisions that matter: https://www.youtube.com/watch?v=C6WKt6fJiso
Want to know how much impact you can drive in under a year at SocialCops? See our 2017 year in review: https://socialcops.com/2017/
For more information on our hiring process, check out our blog: https://blog.socialcops.com/inside-sc/team-culture/interested-joining-socialcops-team-heres-need/
Role Brief: 6+ years of demonstrable experience designing technological solutions to complex data problems, and developing and testing modular, reusable, efficient and scalable code to implement those solutions.
Brief about Fractal & Team: Fractal Analytics leads Fortune 500 companies in leveraging big data, analytics and technology to drive smarter, faster and more accurate decisions in every aspect of their business. Our Big Data capability team is hiring technologists who can produce beautiful, functional code to solve complex analytics problems. If you are an exceptional developer who loves to push the boundaries to solve complex business problems using innovative solutions, then we would like to talk with you.
Job Responsibilities:
- Provide technical leadership in the big data space (Hadoop stack such as M/R, HDFS, Pig, Hive, HBase, Flume, Sqoop, etc.; NoSQL stores such as Cassandra and HBase) across Fractal, and contribute to open source big data technologies.
- Visualize and evangelize next-generation infrastructure in the big data space (batch, near-real-time and real-time technologies).
- Evaluate and recommend a big data technology stack that aligns with the company's technology.
- Be passionate about continuous learning, experimenting with, applying and contributing to cutting-edge open source technologies and software paradigms.
- Drive significant technology initiatives end to end and across multiple layers of architecture.
- Provide strong technical leadership in adopting and contributing to open source big data technologies across the company.
- Provide strong technical expertise (performance, application design, stack upgrades) to lead platform engineering.
- Define and drive best practices that can be adopted in the big data stack; evangelize these practices across teams and BUs.
- Drive operational excellence through root cause analysis and continuous improvement for big data technologies and processes, and contribute back to the open source community.
- Provide technical leadership and be a role model to data engineers pursuing a technical career path in engineering.
- Provide and inspire innovations that fuel the growth of Fractal as a whole.
EXPERIENCE
Must have (ideally, this would include work on the following technologies):
- Expert-level proficiency in at least one of Java, C++ or Python (preferred); Scala knowledge is a strong advantage.
- Strong understanding of and experience in distributed computing frameworks, particularly Apache Hadoop 2.0 (YARN, MR and HDFS) and associated technologies: one or more of Hive, Sqoop, Avro, Flume, Oozie, ZooKeeper, etc. Hands-on experience with Apache Spark and its components (Streaming, SQL, MLlib) is a strong advantage.
- Operating knowledge of cloud computing platforms (AWS, especially the EMR, EC2, S3 and SWF services and the AWS CLI).
- Experience working within a Linux computing environment and using command-line tools, including knowledge of shell/Python scripting for automating common tasks.
- Ability to work in a team in an agile setting, familiarity with JIRA, and a clear understanding of how Git works.
- A technologist who loves to code and design.
In addition, the ideal candidate would have great problem-solving skills, and the ability and confidence to hack their way out of tight corners.
Relevant Experience:
- Java, Python or C++ expertise
- Linux environment and shell scripting
- Distributed computing frameworks (Hadoop or Spark)
- Cloud computing platforms (AWS)
Good to have:
- A statistical or machine learning DSL like R
- Distributed and low-latency (streaming) application architecture
- Row-store distributed DBMSs such as Cassandra
- Familiarity with API design
Qualification: B.E/B.Tech/M.Tech in Computer Science or a related technical degree, or equivalent.
ITTStar Global Services is a subsidiary unit in Bengaluru with its head office in Atlanta, Georgia. We are primarily into data management and data life cycle solutions, which include machine learning and artificial intelligence. For further info, visit ITTstar.com. As discussed over the call, I am forwarding the job description. We are looking for enthusiastic and experienced data engineers to be part of our bustling team of professionals at our Bengaluru location.
JOB DESCRIPTION:
1. Experience in Spark and big data is mandatory.
2. Strong programming skills in Python, Java, Scala or Node.js.
3. Hands-on experience handling multiple data types: JSON/XML/delimited/unstructured.
4. Hands-on experience working with at least one relational and/or NoSQL database.
5. Knowledge of SQL queries and data modeling.
6. Hands-on experience working on ETL use cases, either on-premise or in the cloud.
7. Experience with any cloud platform (AWS, Azure, GCP, Alibaba).
8. Knowledge of one or more AWS services like Kinesis, EC2, EMR, Hive integration, Athena, Firehose, Lambda, S3, Glue Crawler, Redshift or RDS is a plus.
9. Good communication skills and self-driven; should be able to deliver projects with minimal instruction from the client.
Job Skill Requirements:
• 4+ years of experience building and managing complex products/solutions
• 2+ years of experience in DW/ELT/ETL technologies (nice to have)
• 3+ years of hands-on development experience using big data technologies like Hadoop and Spark
• 3+ years of hands-on development experience using big data ecosystem components like Hive, Impala, HBase, Sqoop, Oozie, etc.
• Proficient-level programming in Scala
• Good to have: hands-on experience building web services in a Python/Scala stack
• Good to have: experience developing RESTful web services
• Knowledge of web technologies and protocols (NoSQL/JSON/REST/JMS)
• Looking for a Big Data Engineer with 3+ years of experience.
• Hands-on experience with MapReduce-based platforms like Pig, Spark and Shark.
• Hands-on experience with data pipeline tools like Kafka, Storm and Spark Streaming.
• Ability to store and query data with Sqoop, Hive, MySQL, HBase, Cassandra, MongoDB, Drill, Phoenix and Presto.
• Hands-on experience managing big data on a cluster with HDFS and MapReduce.
• Ability to handle streaming data in real time with Kafka, Flume, Spark Streaming, Flink and Storm.
• Experience with Azure cloud, Cognitive Services and Databricks is preferred.
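The MapReduce model named in the list above can be illustrated without any cluster: a mapper emits (key, value) pairs and a reducer aggregates per key. The sketch below is a toy pure-Python word count under that model (invented input lines); a real job would run the same two phases distributed across HDFS via Hadoop or Spark.

```python
from collections import defaultdict
from typing import Iterable, Iterator, Tuple

# Toy illustration of the MapReduce programming model: word count.
# The mapper emits (word, 1) pairs; the reducer sums counts per key.
# In a real Hadoop/Spark job these phases run distributed; here the
# shuffle and reduce are collapsed into one in-memory pass.

def mapper(line: str) -> Iterator[Tuple[str, int]]:
    for word in line.lower().split():
        yield word, 1

def reducer(pairs: Iterable[Tuple[str, int]]) -> dict:
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

lines = ["spark streams data", "kafka streams data"]
counts = reducer(pair for line in lines for pair in mapper(line))
```

The same shape appears in Spark as `flatMap` followed by `reduceByKey`; the value of the model is that each phase is independently parallelizable.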
Analytical Skills: You will work with large amounts of data. You will need to see through the data and analyze it to draw conclusions, and you need math skills to estimate numerical data.
Communication Skills: You will need to write and speak clearly, easily communicating complex ideas.
Critical Thinking: You must look at the numbers, trends and data and come to new conclusions based on the findings.
Attention to Detail: Be vigilant in your analysis to come to correct conclusions.
Programming Skills: You should know Python, R, MySQL and Hadoop.
Deep Learning Skills: You should have worked on machine learning and deep learning.
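The analytical skill described above, estimating numerical data and drawing conclusions from it, can be sketched with the standard library alone: summarize a sample and measure how strongly two series move together. The data values below are invented for illustration.

```python
from statistics import mean, stdev

# Minimal sketch of numeric analysis: a sample Pearson correlation
# computed from first principles. Values are invented example data.

def pearson(xs, ys):
    """Sample Pearson correlation coefficient of two equal-length series."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)
    return cov / (stdev(xs) * stdev(ys))

ad_spend = [10, 20, 30, 40, 50]
sales    = [12, 24, 33, 46, 55]

r = pearson(ad_spend, sales)   # close to 1.0: strongly correlated
```

A coefficient near 1.0 suggests a strong positive linear relationship; the critical-thinking part is remembering that correlation alone never establishes causation.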
Job Requirement:
- Installation, configuration and administration of big data components (including Hadoop/Spark) for batch and real-time analytics and data hubs
- Capable of processing large sets of structured, semi-structured and unstructured data
- Able to assess business rules, collaborate with stakeholders and perform source-to-target data mapping, design and review
- Familiar with data architecture, including data ingestion pipeline design, Hadoop information architecture, data modeling and data mining, machine learning and advanced data processing
- Optional: a visual communicator with the ability to convert and present data in an easily comprehensible visualization using tools like D3.js and Tableau
- Enjoys being challenged and solving complex problems on a daily basis
- Proficient in executing efficient and robust ETL workflows
- Able to work in teams and collaborate with others to clarify requirements
- Able to tune Hadoop solutions to improve performance and the end-user experience
- Strong coordination and project management skills to handle complex projects
- Engineering background
Job Title: Software Developer – Big Data
Responsibilities: We are looking for a Big Data Developer who can drive innovation, take ownership and deliver results.
• Understand business requirements from stakeholders
• Build and own Mintifi's big data applications
• Be heavily involved in every step of the product development process, from ideation to implementation to release
• Design and build systems with automated instrumentation and monitoring
• Write unit and integration tests
• Collaborate with cross-functional teams to validate and get feedback on the efficacy of results created by the big data applications, and use the feedback to improve the business logic
• Take a proactive approach to turning ambiguous problem spaces into clear design solutions
Qualifications:
• Hands-on programming skills in Apache Spark using Java or Scala
• Good understanding of data structures and algorithms
• Good understanding of relational and non-relational database concepts (MySQL, Hadoop, MongoDB)
• Experience with Hadoop ecosystem components like YARN and ZooKeeper would be a strong plus
Responsibilities: Design and develop the ETL framework and data pipelines in Python 3. Orchestrate complex data flows from various data sources (such as RDBMSs and REST APIs) to the data warehouse and vice versa. Develop app modules (in Django) for enhanced ETL monitoring. Devise technical strategies for making data seamlessly available to the BI and Data Science teams. Collaborate with engineering, marketing, sales and finance teams across the organization and help Chargebee develop complete data solutions. Serve as a subject-matter expert for available data elements and analytic capabilities.
Qualification: Expert programming skills with the ability to write clean and well-designed code. Expertise in Python, with knowledge of at least one Python web framework. Strong SQL knowledge and high proficiency in writing advanced SQL. Hands-on experience modeling relational databases. Experience integrating with third-party platforms is an added advantage. Genuine curiosity, proven problem-solving ability, and a passion for programming and data.
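The source-to-warehouse flow described above follows the classic extract-transform-load shape. As a hedged sketch of that pattern (not the actual pipeline), the example below uses in-memory SQLite to stand in for both the source RDBMS and the warehouse; the `payments`/`fact_payments` tables and columns are hypothetical.

```python
import sqlite3

# Minimal extract-transform-load sketch. SQLite stands in for both
# the source RDBMS and the warehouse; table and column names are
# invented for illustration only.

def extract(conn):
    """Pull raw rows from the source system."""
    return conn.execute(
        "SELECT id, amount_cents, currency FROM payments").fetchall()

def transform(rows):
    """Normalize amounts to whole currency units, uppercase currencies."""
    return [(rid, cents / 100.0, cur.upper()) for rid, cents, cur in rows]

def load(conn, rows):
    """Write cleaned rows into the warehouse fact table."""
    conn.executemany("INSERT INTO fact_payments VALUES (?, ?, ?)", rows)
    conn.commit()

source = sqlite3.connect(":memory:")
source.execute(
    "CREATE TABLE payments (id INTEGER, amount_cents INTEGER, currency TEXT)")
source.executemany("INSERT INTO payments VALUES (?, ?, ?)",
                   [(1, 1999, "usd"), (2, 500, "eur")])

warehouse = sqlite3.connect(":memory:")
warehouse.execute(
    "CREATE TABLE fact_payments (id INTEGER, amount REAL, currency TEXT)")

load(warehouse, transform(extract(source)))
result = warehouse.execute(
    "SELECT amount, currency FROM fact_payments ORDER BY id").fetchall()
```

A production framework would add incremental watermarks, retries, and orchestration, but the three-function decomposition is the core of what "design an ETL framework" asks for.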
About us: GreyAtom is a Mumbai-based ed-tech company specializing in upskilling tech professionals by harnessing the power of data science. We are a turnkey solution for upgrading your skill set and career prospects. The Data Science team at GreyAtom is on a mission to build systemic intelligence across the GreyAtom product and ecosystem. We are looking for a team player who understands software engineering and data science and has a good grasp of business. Some of the problems we are currently focusing on:
- What is a learner's competency across various modules?
- How does a learner compare against other people in the ecosystem?
- Does my learning behaviour match that of people who got jobs in data science?
- Attrition alerts for students
- Personalization of the learning path for each student
- Factors that predict a learner's success or drop-out risk
- Mapping the skill and competency matrix for each learner
At GreyAtom, data scientists are embedded with the engineering and product team for the problem they are working on. This ensures that data science solutions are envisioned along with product delivery. We have a very flat structure within the Data Science team, which enables us to focus on excellence and create a deep sense of ownership. Being a young team, we are also able to democratize the process of problem selection. Our techniques span classification, clustering, matrix factorization, graphical models, networks and graph algorithms, topic modeling, image processing, deep learning and NLP, each exercised at a fairly large scale. If you want to challenge the state of the art and impact the wide-open landscape in India, the GreyAtom Data Science team is the place for you. We want a passionate data scientist who has experience executing and evangelizing machine learning or AI technologies to solve business problems, resulting in an uncompromised user experience, cost savings, and business insights elicited from big data.
Your Impact: In this role, you'll help support GreyAtom's charter to build Dataware by:
- Communicating with scientists as well as engineers
- Bringing about significant innovation and solving complex problems in analytics-based projects
- Possibly managing indirect reports and a small project team
- Mentoring, training, developing and serving as a knowledge resource for less experienced software engineers and data professionals
- Working and collaborating with the Product team to build data science into Commit.Live
- Conceptualising, designing and delivering high-quality solutions and insightful analysis
- Conducting research and prototyping innovations; data and requirements gathering; solution scoping and architecture
Skills Required:
- Typically, 3 or more years of experience executing on projects as a lead, plus analytic computing experience
- Mathematical skills including statistics fundamentals, statistical modelling, regression analysis, time series, decision trees, and correlation (clustering, association rules, k-nearest neighbours)
- Analytical skills including data analytics, data modelling, machine learning, text mining, and optimization/simulation skills (genetic algorithms, Monte Carlo simulations, linear programming, quadratic programming, etc.)
- Data-driven problem solving and data munging
- Papers published in journals in the ML/AI area are an added advantage
- Hands-on experience with Python
- Understanding and manipulation of unstructured data
- Experience with one or more cloud or DevOps services like AWS or Docker
- Good business acumen in any vertical, preferably ed-tech/learning analytics
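One of the techniques named in the skills list above, k-nearest neighbours, is simple enough to sketch from scratch: classify a query point by majority vote among the k closest labelled points. The training data below is invented for illustration.

```python
from collections import Counter
from math import dist

# Toy k-nearest-neighbours classifier: majority vote among the k
# closest labelled points (Euclidean distance). Data is invented;
# a real system would use an optimized library and far more features.

def knn_predict(train, query, k=3):
    """train: list of (feature_tuple, label); query: feature tuple."""
    nearest = sorted(train, key=lambda item: dist(item[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

train = [((1.0, 1.0), "a"), ((1.2, 0.9), "a"),
         ((5.0, 5.0), "b"), ((5.1, 4.8), "b"), ((4.9, 5.2), "b")]

label = knn_predict(train, (1.1, 1.0), k=3)
```

With k=3 the query near the first cluster collects two "a" votes against one "b"; choosing odd k avoids ties in two-class problems.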
We are looking for extremely smart software engineers who can solve complex distributed software issues. Candidates who have handled large amounts of structured and unstructured data are preferred.
RESPONSIBILITIES:
1. Full ownership of tech, right from driving product decisions to architecture to deployment.
2. Develop cutting-edge user experiences and build cutting-edge technology solutions like instant messaging in poor networks, live discussions, live videos and optimal matching.
3. Use billions of data points to build a user personalization engine.
4. Build a data network effects engine to increase engagement and virality.
5. Scale the systems to billions of daily hits.
6. Deep dive into performance, power management, memory optimization and network connectivity optimization for the next billion Indians.
7. Orchestrate complicated workflows, asynchronous actions and higher-order components.
8. Work directly with the Product and Design teams.
REQUIREMENTS:
1. Should have hacked some (computer or non-computer) system to your advantage.
2. Built and managed systems with a scale of 10Mn+ daily hits.
3. Strong architectural experience.
4. Strong experience in memory management, performance tuning and resource optimization.
5. PREFERENCE: if you are a woman, an ex-entrepreneur, or have a CS bachelor's degree from IIT/BITS/NIT.
P.S. If you don't fulfill one of the requirements, you need to be exceptional in the others to be considered.
Description:
- Must have direct hands-on experience (4 years) building complex data science solutions
- Must have fundamental knowledge of inferential statistics
- Should have worked on predictive modelling using Python/R
- Experience should include file I/O, data harmonization, data exploration, machine learning techniques (supervised and unsupervised), multi-dimensional array processing, deep learning, NLP and image processing
- Prior experience in the healthcare domain is a plus
- Experience using big data is a plus
- Should have excellent analytical and problem-solving ability, and should be able to grasp new concepts quickly
- Should be well familiar with the Agile project management methodology
- Should have excellent written and verbal communication skills
- Should be a team player with an open mind
Description: Deep experience with and understanding of Apache Hadoop and surrounding technologies is required, including experience with Spark, Impala, Hive, Flume, Parquet and MapReduce.
- Strong understanding of development languages including Java, Python, Scala and shell scripting
- Expertise in Apache Spark 2.x framework principles and usage
- Should be proficient in developing Spark batch and streaming jobs in Python, Scala or Java
- Should have proven experience in performance tuning and improvement of Spark applications, both from the application code and configuration perspectives
- Should be proficient in Kafka and its integration with Spark
- Should be proficient in Spark SQL and data warehousing techniques using Hive
- Should be very proficient in Unix shell scripting and in operating on Linux
- Should have knowledge of cloud-based infrastructure
- Strong understanding of data profiling concepts and the ability to operationalize analyses into design and development activities
- Experience with software development best practices: version control systems, automated builds, etc.
- Experienced in, and able to lead, the phases of the software development life cycle on any project (feasibility planning, analysis, development, integration, test and implementation)
- Capable of working within a team or as an individual
- Experience creating technical documentation
Description Does solving complex business problems and real-world challenges interest you? Do you enjoy seeing the impact your contributions make on a daily basis? Are you passionate about using data analytics to provide game-changing solutions to Global 2000 clients? Do you thrive in a dynamic work environment that constantly pushes you to be the best you can be, and more? Are you ready to work with smart colleagues who drive for excellence in everything they do? If you possess a solutions mindset, strong technological expertise, and commitment to be part of a tremendous journey, come join our growing, global team. See what Saama can do for your career and for your journey. Impact on the business: The candidate would play a key role in delivering success by leveraging Web and Big Data technologies and tools to fulfill the client's business objectives. Responsibilities: Participate in requirement gathering sessions with business users and stakeholders to understand the business needs. Understand functional and non-functional requirements and define the technical architecture and design to cater to them. Produce a detailed technical design document to match the solution design specifications. Review and validate effort estimates produced by the development team for design and build phases. Understand and apply the company's solutions/frameworks to the design when needed. Collaborate with the development team to produce a technical specification for custom development and systems integration requirements. Participate in, and lead when needed, project meetings with the customer. Collaborate with senior architects in the customer organization and convince/defend design and architecture decisions for the project. Be a technical mentor to the development team. Required Skills Experience in designing scalable, complex distributed systems. Hands-on development experience in the Big Data Hadoop ecosystem & analytics space. Experience working with cloud storage solutions in AWS, Azure, etc.
MS/BS degree in Computer Science, Mathematics, Engineering or a related field. 12 years of experience as a technology leader designing and developing data architecture solutions, with more than 2 years specializing in big data architecture or data analytics. Experience implementing solutions using big data technologies: Hadoop, MapReduce, Pig, Hive, Spark, Storm, Impala, Oozie, Flume, ZooKeeper, Sqoop, etc. Good understanding of NoSQL and prior experience working with NoSQL databases: HBase, MongoDB, Cassandra. Competencies: Self-starter who gets results with minimal support and direction in a fast-paced environment. Takes initiative; challenges the status quo to drive change. Learns quickly; takes smart risks to experiment and learn. Works well with others; builds trust and maintains credibility. Identifies and confirms key requirements in dynamic environments; anticipates tasks and contingencies. Strong analytical skills; able to apply creative thinking to generate solutions for complex problems. Communicates effectively; productive communication with clients and all key stakeholders (both verbal and written).
Requirements: Minimum 4 years' work experience in building, managing and maintaining analytics applications B.Tech/BE in CS/IT from Tier 1/2 Institutes Strong fundamentals of data structures and algorithms Good analytical & problem-solving skills Strong hands-on experience in Python In-depth knowledge of queueing systems (Kafka/ActiveMQ/RabbitMQ) Experience in building data pipelines & real-time analytics systems Experience in SQL (MySQL) & NoSQL (Mongo/Cassandra) databases is a plus Understanding of Service-Oriented Architecture Delivered high-quality work with a significant contribution Expert in git, unit tests, technical documentation and other development best practices Experience in handling small teams
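The queueing systems this posting asks for (Kafka/ActiveMQ/RabbitMQ) all follow a producer/consumer pattern that can be illustrated in-process with Python's standard library. This is only a toy stand-in for a real broker, and the event names are invented; a real Kafka client would publish to a topic on a remote cluster instead of a local `queue.Queue`.

```python
import queue
import threading

broker = queue.Queue()   # stands in for a topic/queue on a real broker
SENTINEL = object()      # signals end-of-stream to the consumer

def producer(events):
    for event in events:
        broker.put(event)      # analogous to producer.send(topic, event)
    broker.put(SENTINEL)

def consumer(results):
    while True:
        event = broker.get()   # analogous to polling the topic
        if event is SENTINEL:
            break
        results.append(event.upper())  # pretend "processing" step

results = []
worker = threading.Thread(target=consumer, args=(results,))
worker.start()
producer(["order_created", "order_paid"])
worker.join()
print(results)  # ['ORDER_CREATED', 'ORDER_PAID']
```

The value of a real broker over this sketch is durability and fan-out: messages survive consumer restarts and multiple consumer groups can read the same stream independently.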
We are looking for a Machine Learning Developer who possesses a passion for machine learning technology & big data and will work with our next-generation Universal IoT platform. Responsibilities: • Design and build systems that learn from, predict, and analyze data • Build and enhance tools to mine data at scale • Enable the integration of machine learning models in the Chariot IoT Platform • Ensure the scalability of machine learning analytics across millions of networked sensors • Work with other engineering teams to integrate our streaming, batch, or ad-hoc analysis algorithms into Chariot IoT's suite of applications • Develop generalizable APIs so other engineers can use our work without needing to be a machine learning expert
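A minimal illustration of a "machine that learns, predicts and analyzes data" is a one-feature least-squares fit, sketched here in pure Python on made-up sensor readings. Production IoT analytics would of course use libraries such as scikit-learn over far larger data; this only shows the fit/predict shape.

```python
def fit_line(xs, ys):
    # Ordinary least squares for y = a*x + b, closed form for one feature
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var             # slope
    b = mean_y - a * mean_x   # intercept
    return a, b

def predict(a, b, x):
    return a * x + b

# invented readings that follow y = 2x + 1 exactly
a, b = fit_line([0, 1, 2, 3], [1, 3, 5, 7])
print(round(a, 6), round(b, 6))  # 2.0 1.0
print(predict(a, b, 10))         # 21.0
```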
Responsibilities Ensure timely and top-quality product delivery. Ensure that the end product is fully and correctly defined and documented. Ensure implementation/continuous improvement of formal processes to support product development activities. Drive the architecture/design decisions needed to achieve cost-effective and high-performance results. Conduct feasibility analysis; produce functional and design specifications of proposed new features. Provide helpful and productive code reviews for peers and junior members of the team. Troubleshoot complex issues discovered in-house as well as in customer environments. Qualifications · Strong computer science fundamentals in algorithms, data structures, databases, operating systems, etc. · Expertise in Java, object-oriented programming, and design patterns · Experience in coding and implementing scalable solutions in a large-scale distributed environment · Working experience in a Linux/UNIX environment is good to have · Experience with relational databases and database concepts, preferably MySQL · Experience with SQL and Java optimization for real-time systems · Familiarity with version control systems (Git) and build tools like Maven · Excellent interpersonal, written, and verbal communication skills · BE/B.Tech./M.Sc./MCS/MCA in Computers or equivalent
Description Who We Are Bridge International Academies is the world's largest and fastest-growing chain of primary and pre-primary schools, with more than 500 academies and 100,000 pupils in Kenya, Uganda, Nigeria, India, and Liberia. We democratize the right to succeed by giving families living in poverty access to the high-quality education that will allow their children to live a very different life. We leverage experts, data, and technology in order to standardize and scale every aspect of quality education delivery, from how and where academies are built to how teachers are selected and trained, and how lessons are delivered and monitored for improvement. We are vertically-integrated, tech-enabled, and on our way to profitability. Bridge expects to continue rapid expansion in 2018 across existing markets. The Bridge Offer Roughly 2.7 billion people live on less than $2/day. In their communities, there is a huge gap between the education offered and the needs of the population. Too often the schools available to them fail to deliver for these families. The quality offered results in the average pupil from our communities in East Africa failing to reach proficiency in primary school and, on average, failing the primary exit exams that are critical to their development. Teachers are unresponsive and occasionally abusive, and fees are often unaffordable. Even government schools can cost families a significant amount of money after all the additional fees are added up. With 47% of classroom teaching time lost due to teacher absenteeism or neglect, 55% of families in our communities end up choosing private schools instead, but then fear for the stability and sustainability of their choice, as many schools close after only a few years of service. Both the government schools and the private schools tend to lack well-conceived scope and sequences, instructional materials, student achievement data, and the capacity to react to that data.
Families are actively searching for a better academic alternative. Enter Bridge International Academies. As of September 2017, Bridge operates more than 500 academies, serving roughly 100,000 pupils in Kenya, Uganda, Nigeria, India, and Liberia. Bridge utilises a scripted-learning education methodology coupled with 'big data' (all teachers have tablets for instruction, assessment, and data-gathering) that allows us to make curriculum a little better every day. With plans to enrol ten million students ten years from now, Bridge International Academies offers a tremendous opportunity to grow with one of the world's most exciting, ambitious, and socially conscious companies, with leadership roles available across a number of competencies and geographies. Tech at Bridge Technology plays a critical role at Bridge in enabling us to provide education at massive scale and low cost - it's one of the key elements that gives us the ability to deliver what no one else can. Tech spans several key functions, from the hardware and software that our academies use to run all aspects of teaching and management, including mobile payments, to the systems that enable our country headquarters to manage massive local operations, to the data backbone that informs all of our strategic and tactical decision making. It's a lot of custom software development and a lot of back office systems. We've got a ridiculously ambitious mission at Bridge, and it's a place where passionate technologists have a chance to directly change the world. No kidding. About the Role Tech at Bridge is a highly complex, vertically-integrated affair, with systems supporting an ever-expanding range of functions and countries, and crossing between software development, IT operations, academy operations, and logistics/supply chain.
At the same time, our teams run lean and things change fast - governments make policy decisions that affect us, launching new countries is a frenetic affair, and we still need to evolve our core technology offering. We are looking for a full-time Senior Software Engineer to join our new Hyderabad-based cross-functional software development team, which will participate in building the software that powers and improves efficiency to enhance our competitive advantage. This person should be familiar with design and implementation issues specific to data-driven, highly scalable environments and be able to handle such issues with flexibility and ingenuity. The ideal candidate will have a strong customer focus, a proven track record of delivering high-quality products in a continuous delivery environment, and an appreciation for clean and simple code. Bridge especially values T-shaped team members - individuals with deep expertise in particular areas, but comfortable working across all parts of the technology stack. What You Will Do Assume ownership over the server-side architecture of the Bridge software platforms Design, implement, and support new products and features Analyse and improve the server-side architecture with a focus on maintainability and scalability Mentor and guide junior engineers, including performing code reviews Collaborate with project sponsors to elaborate requirements and facilitate trade-offs that maximise customer value Work with product and development teams to establish overall technical direction and product strategy What You Will Have You have a BA/BS in Computer Science or a related technical field. You have 6 years of enterprise software development experience. You are comfortable recommending and advocating for enterprise architectural best practices for highly-available, scalable, and reliable implementations.
You have direct experience integrating off-the-shelf and custom-built software, and understand the trade-offs between building and buying software. You function well in a fast-paced, informal environment where constant change is the norm and the bar for quality is set high. You have enterprise-level experience with continuous delivery practices and tools (e.g. Jenkins, Bamboo, GoCD, Octopus). Proficiency in test-driven development (TDD) and/or behaviour-driven development (BDD) is required. You are an expert in four or more of the following areas and interested in learning the rest: C#/.NET Web services (esp. WebAPI or NancyFx; Richardson L2) Cloud environments (esp. AWS) and architectures/implementations (e.g. CQRS/ES, circuit breakers, messaging, etc.) Enterprise application performance monitoring (e.g. ELK, Nagios, New Relic, Riverbed) System security (e.g. OWASP, OAuth) Infrastructure-as-Code (e.g. Puppet, Chef, Ansible, Docker, Boxstarter, Chocolatey/WinRM/PowerShell) MS SQL Server/T-SQL You must have worked in an agile delivery environment and understand not only the mechanics, but also the underlying motivations. Bridge is primarily a .NET shop (server-side), so experience in this area is preferable; however, Bridge also values developers with diverse experience, so serious exposure to other languages and ecosystems (e.g. NodeJS, Ruby, functional languages, NoSQL DBs) is a bonus. Bridge is a strong supporter of open source projects; familiarity with OSS projects is a plus, and contributions to open source projects are a big plus. You're also A detailed doer - You have a track record of getting things done. You're organized and responsive. You take ownership of every idea you touch and execute it to a fine level of detail, setting targets, engaging others, and doing whatever it takes to get the job done. You can multi-task dozens of such projects at once and never lose sight of the details.
Likely, you have some experience in a start-up or other rapid-growth company. A networking mastermind - You excel at meeting new people and turning them into advocates. You communicate in a clear, conscientious, and effective way in both written and oral speech. You can influence strangers in the course of a single conversation. Allies and colleagues will go to bat for your ideas. A creative problem-solver - Growing any business from scratch comes with massive and constant challenges. On top of that, Bridge works in volatile, low-resource communities and runs on fees averaging just $6 a month per pupil. You need to be flexible and ready to get everything done effectively, quickly, and affordably with the materials at hand. Every dollar you spend is a dollar our customers, who live on less than $2 a day, will have to pay for. A customer advocate - Our customers - these families living on less than $2 a day per person - never leave your mind. You know them, get them, have shared a meal with them (or would be happy to in the future). You would never shrink back from shaking a parent's hand or picking up a crying child, no matter what the person was wearing or looked like. Every decision you make considers their customer benefit, experience, and value. A life-long learner - You believe you can always do better. You welcome constructive criticism and provide it freely to others. You know you only get better tomorrow when others point out where you've missed things or failed today.
As a Big Data Engineer, you will build utilities that help orchestrate the migration of massive Hadoop/Big Data systems onto public cloud systems. You will build data processing scripts and pipelines that serve a large number of jobs and queries per day. The services you build will integrate directly with cloud services, opening the door to new and cutting-edge reusable solutions. You will work with engineering teams, co-workers, and customers to gain new insights and dream of new possibilities. The Big Data Engineering team is hiring in the following areas: • Distributed storage and compute solutions • Data ingestion, consolidation, and warehousing • Cloud migrations and replication pipelines • Hybrid on-premise and in-cloud Big Data solutions • Big Data, Hadoop and Spark processing Basic Requirements: • 2+ years' hands-on experience with data structures, distributed systems, Hadoop and Spark, and SQL and NoSQL databases • Strong software development skills in at least one of: Java, C/C++, Python or Scala. • Experience building and deploying cloud-based solutions at scale. • Experience in developing Big Data solutions (migration, storage, processing) • BS, MS or PhD degree in Computer Science or Engineering, and 5+ years of relevant work experience in Big Data and cloud systems. • Experience building and supporting large-scale systems in a production environment. Technology Stack: Cloud Platforms – AWS, GCP or Azure Big Data Distributions – Any of Apache Hadoop/CDH/HDP/EMR/Google Dataproc/HDInsight Distributed processing Frameworks – One or more of MapReduce, Apache Spark, Apache Storm, Apache Flink. Database/warehouse – Hive, HBase, and at least one cloud-native service Orchestration Frameworks – Any of Airflow, Oozie, Apache NiFi, Google Dataflow Message/Event Solutions – Any of Kafka, Kinesis, Cloud Pub/Sub Container Orchestration (Good to have) – Kubernetes or Swarm
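The ingestion/consolidation pipelines this role describes are commonly structured as composable stages; here is a minimal generator-based sketch in Python. The record fields, the `status` filter, and the three stage names are all hypothetical; a production pipeline would read from a source like Kafka or S3 and load into a warehouse rather than a list.

```python
import json

def ingest(raw_records):
    # Ingestion stage: parse raw JSON lines, skipping malformed ones
    for line in raw_records:
        try:
            yield json.loads(line)
        except json.JSONDecodeError:
            continue

def transform(records):
    # Transformation stage: keep successful events and normalise field types
    for rec in records:
        if rec.get("status") == "ok":
            yield {"id": rec["id"], "bytes": int(rec["bytes"])}

def load(records):
    # Load stage: a real pipeline would write to a warehouse table here
    return list(records)

raw = ['{"id": 1, "status": "ok", "bytes": "512"}',
       'not json',
       '{"id": 2, "status": "failed", "bytes": "0"}']
out = load(transform(ingest(raw)))
print(out)  # [{'id': 1, 'bytes': 512}]
```

Because each stage is a generator, records stream through one at a time; that same lazy, stage-wise shape is what orchestration frameworks like Airflow schedule at the job level.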
Sr Data Engineer Job Description About Us DataWeave is a data platform which aggregates publicly available data from disparate sources and makes it available in the right format to enable companies to take strategic decisions using trans-firewall analytics. It's hard to tell what we love more, problems or solutions! Every day, we choose to address some of the hardest data problems that there are. We are in the business of making sense of messy public data on the web. At serious scale! Requirements: - Building an intelligent and highly scalable crawling platform - Data extraction and processing at scale - Enhancing existing data stores/data models - Building a low-latency API layer for serving data to power dashboards, reports, and analytics functionality - Constantly evolving our data platform to support new features Expectations: - 4+ years of relevant industry experience. - Strong in algorithms and problem-solving skills - Software development experience in one or more general-purpose programming languages (e.g. Python, C/C++, Ruby, Java, C#). - Exceptional coding abilities and experience with building large-scale and high-availability applications. - Experience in search/information retrieval platforms like Solr, Lucene and Elasticsearch. - Experience in building and maintaining large-scale web crawlers. - In-depth knowledge of SQL and NoSQL datastores. - Ability to design and build quick prototypes. - Experience in working on cloud-based infrastructure like AWS, GCE. Growth at DataWeave - Fast-paced growth opportunities at a dynamically evolving start-up. - You have the opportunity to work in many different areas and explore a wide variety of tools to figure out what really excites you.
- Precily AI: Automatic summarization, shortening a business document or book with our AI. Create a summary of the major points of the original document. AI can make a coherent summary taking into account variables such as length, writing style, and syntax. We're also working in the legal domain to reduce the high number of pending cases in India. We use Artificial Intelligence and Machine Learning capabilities such as NLP and Neural Networks in processing the data to provide solutions for various industries such as Enterprise, Healthcare, and Legal.
Must-have skills: - Very strong coding skills in Core Java (1.5 and above) - Should be able to analyze complex code structures, data structures, and algorithms/logic - Should have hands-on knowledge of working on Java multithreading programs - Should have expertise in the Java Collections framework - Must have good exposure to Struts/JSP services, jQuery/Ajax, and JSON-based UI rendering Good-to-have skills (not mandatory): - Good working knowledge of the JavaScript/jQuery frameworks - Should have used HTML5/CSS3/Node.js/D3 frameworks in at least one earlier project - Hands-on experience with the latest technologies like Cassandra, Solr, Hadoop would be an advantage - Knowledge of graph structures would be desirable
DevOps Architect, responsible for designing and implementing DevOps-related work tasks and clarifying system/deployment-related issues directly with the customer.
Work on different POCs. Experience in Java/J2EE programming and coding, and more.
Strong background in Linux/Unix administration. • Experience with CI tools like Jenkins • Experience with automation/configuration management using Docker, Puppet, Ansible, Chef or an equivalent • Build, release and configuration management of production systems. • System troubleshooting and problem solving across platform and application domains. • Deploying, automating, maintaining and managing AWS cloud-based production systems, to ensure the availability, performance, scalability and security of production systems. • Pre-production acceptance testing to help assure the quality of our products/services. • Evaluate new technology options and vendor products. • Strong experience with SQL and MySQL (NoSQL experience is an add-on) • Suggesting architecture improvements, recommending process improvements. • Understanding of cloud-based services and knowledge of hosting, e.g. Amazon AWS. Ensuring critical system security through using best-in-class cloud security solutions. • Ability to use a wide variety of open source technologies and cloud services (experience with AWS is an add-on) • A working understanding of code and script (PHP, Python, Perl and/or Ruby) will be an add-on. • Knowledge of Ant, Maven or other build and release tools will be an add-on. • AWS: 2+ years' experience using a broad range of AWS technologies (e.g. EC2, RDS, ELB, EBS, S3, VPC, Glacier, IAM, CloudWatch, KMS) to develop and maintain an Amazon AWS based cloud solution, with an emphasis on best-practice cloud security. • DevOps: Solid experience as a DevOps Engineer in a 24x7 uptime Amazon AWS environment, including automation experience with configuration management tools. • Monitoring Tools: Experience with system monitoring tools (e.g. Nagios, Zabbix)
Artificial Learning Systems India Pvt. Ltd. is looking for exceptional Python developers who have a good background in, and understanding of, software systems, and the ability to work closely with the rest of the engineering team from the early stages of design all the way through identifying and resolving production issues. Candidate Profile: The ideal candidate will be passionate about this role, which involves deep knowledge of both the application and the product, and will also believe that automation is key to operating large-scale systems. Education: BE/B.Tech. from a reputed college Technical skills required: • 3+ years' experience as a web developer in Python • Software design skills in product development • Proficiency in a modern open-source NoSQL database, preferably Cassandra • Proficient in the HTTP protocol, REST APIs, JSON • Experience with Flask (must have) and Django (good to have) • Experience with Gunicorn, Celery, RabbitMQ, Supervisor Job Type: Full time, permanent Job Location: Bangalore Who are we? Artificial Learning Systems (Artelus) is a 2-year-old company working in the Deep Learning space to solve healthcare problems. The company seeks to make products which would complement the knowledge of, and assist, clinicians in making faster and more accurate diagnoses. Our team comprises a group of dedicated scientists trying to make the world a healthier place using the latest advances in computer science and machine learning, applying them to the field of medicine and healthcare. Why work with Artelus? We are working on exciting new scientific developments in the area of healthcare, and working with us will get you a solid education whatever your level of experience. This is a very exciting opportunity for a young scientist, and we look forward to working with you to help you develop your skills in our R&D center. What does working with Artelus mean to you?
• Working in a high-energy and challenging environment • Work with international clients • Work with cutting-edge technologies • Be a part of an exciting, path-breaking project • Great environment to work in
Description Auzmor is a US-headquartered, funded SaaS startup focused on disrupting the HR space. We combine passion and domain expertise, and build products with a focus on great end-user experiences. We are looking for a Technical Architect to envision, build, launch and scale multiple SaaS products. What You Will Do: • Understand the broader strategy, business goals, and engineering priorities of the company and how to incorporate them into your designs of systems, components, or features • Design applications and architectures for multi-tenant SaaS software • Be responsible for the selection and use of frameworks, platforms and design patterns for cloud-based multi-tenant SaaS applications • Collaborate with engineers, QA, product managers, UX designers, partners/vendors, and other architects to build scalable systems, services, and products for our diverse ecosystem of users across apps What you will need • Minimum of 5+ years of hands-on engineering experience in SaaS, cloud services environments, with architecture design and definition experience using Java/JEE, Struts, Spring, JMS & ORM (Hibernate, JPA) or other server-side technologies and frameworks. • Strong understanding of architecture patterns such as multi-tenancy, scalability, federation, and microservices (design, decomposition, and maintenance) to build cloud-ready systems • Experience with server-side technologies (preferably Java or Go), frontend technologies (HTML/CSS, native JS, React, Angular, etc.) and testing frameworks and automation (PHPUnit, Codeception, Behat, Selenium, WebDriver, etc.) • Passion for quality and engineering excellence at scale What we would love to see • Exposure to Big Data-related technologies such as Hadoop, Spark, Cassandra, MapReduce or NoSQL, and data management, data retrieval, data quality, ETL, and data analysis. • Familiarity with containerized deployments and cloud computing platforms (AWS, Azure, GCP)
Looking for Big data Developers in Mumbai Location
APPLY LINK: http://bit.ly/2yipqSE Go through the entire job post thoroughly before pressing Apply. There is an eleven-character French word v*n*i*r*t*e mentioned somewhere in the whole text which is irrelevant to the context. You will be required to enter this word while applying, else the application won't be considered submitted. Aspirant - Data Science & AI Team: Sciences Full-Time, Trainee Bengaluru, India Relevant Exp: 0 - 10 Years Background: Top-tier institute Compensation: Above standards Busigence is a Decision Intelligence Company. We create decision intelligence products for real people by combining data, technology, business, and behavior, enabling strengthened decisions. We are a scaling, established startup by IIT alumni, innovating and disrupting the marketing domain through artificial intelligence. We bring onboard those people who are dedicated to delivering wisdom to humanity by solving the world's most pressing problems differently, thereby significantly impacting thousands of souls every day. We are a deep-rooted organization with a six-year success story, having worked with folks from top-tier backgrounds (IIT, NSIT, DCE, BITS, IIITs, NITs, IIMs, ISI etc.), maintaining an awesome culture with a common vision to build great data products. In the past we have served fifty-five customers and are presently developing our second product, Robonate. The first was emmoQ, an emotion intelligence platform. The third offering, H2HData, is an innovation lab where we solve hard problems through data, science, & design. We work extensively and intensely on big data, data science, machine learning, deep learning, reinforcement learning, data analytics, natural language processing, cognitive computing, and business intelligence.
First-and-Foremost Before you dive in exploring this opportunity and press Apply, we wish you to evaluate yourself: we are looking for the right candidate, not the best candidate. We love to work with someone who can, mandatorily, gel with our vision, beliefs, thoughts, methods, and values, which are aligned with what can be expected in a true startup with ambitious goals. Skills are always secondary to us. Primarily, you must be someone who is not essentially looking for a job or career, but rather starving for a challenge, you yourself probably don't know since when. A book can be written on what an applicant must have before joining a <real startup with meaningful product>. For brevity, in a nutshell, we need these three in you: 1. You must be [super sharp] (Just an analogue, but Irodov, Mensa, Feynman, Polya, ACM, NIPS, ICAAC, BattleCode, DOTA etc. should have been your done stuff. Can you relate solution 1 to problem 2? Or do you get confused even when you have solved a similar problem in the past? Are you able to grasp a problem statement in one go, or do you get stuck?) 2. You must be [extremely energetic] (Do you raise eyebrows when asked to stretch your limits, both in terms of complexity and extra hours to put in? What comes first in your mind: let's finish it today, or this can be done tomorrow too? It's Friday 10 PM at work. Tired?) 3. You must be [honourably honest] (Do you tell others what you think, or what they want to hear? The latter is good for a sales team with their customers, not for this role. Are you honest with your work? Intrinsically, with yourself first?) You know yourself the best. If not, ask your loved ones and then decide. We clearly need exceedingly motivated people with entrepreneurial traits, not an employee mindset - not at all. This is an immediate requirement. We shall have an accelerated interview process for fast closure - you would be required to be proactive and responsive.
Real ROLE We are looking for students, graduates, and experienced folks with a real passion for algorithms, computing, and analysis. You would be required to work with our sciences team on complex cases from data science, machine learning, and business analytics. Mandatory R1. Must know the ins and outs of functional programming (https://docs.python.org/2/howto/functional.html) in Python, with a strong flair for data structures, linear algebra, & algorithms implementation. OOP alone will not be accepted. R2. Must have hands-on experience with methods, functions, and workarounds in NumPy, Pandas, Scikit-learn, SciPy, and Statsmodels - collectively you should have implemented at least 100 different techniques (we averaged out this figure with our past aspirants who have worked on this role) R3. Must have implemented complex mathematical logic through a functional map-reduce framework in Python R4. Must have an understanding of the EDA cycle, machine learning algorithms, hyper-parameter optimization, ensemble learning, regularization, predictions, clustering, and associations - at an essential level R5. Must have solved at least five problems through data science & machine learning. Mere Coursera learning and/or Kaggle offline attempts shall not be accepted Preferred R6. Good to have the required calibre to learn PySpark within four weeks of joining us R7. Good to have the required calibre to grasp the underlying business for a problem to be solved R8. Good to have an understanding of CNNs, RNNs, MLPs, and Auto-Encoders - at a basic level R9. Good to have solved at least three problems through deep learning. Mere Coursera learning and/or Kaggle offline attempts shall not be accepted R10. Good to have worked on pre-processing techniques for images, audio, and text - OpenCV, Librosa, NLTK R11. Good to have used pre-trained models - VGGNet, Inception, ResNet, WaveNet, Word2Vec Ideal YOU Y1. Degree in engineering, or any other data-heavy field, at Bachelors level or above from a top-tier institute Y2.
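R1 and R3 above refer to Python's functional style; a tiny sketch of the map/filter/reduce idiom from the linked HOWTO, using `functools`. The order records are invented for illustration; the point is expressing the computation as pure transformations folded into one result, with no mutated state.

```python
from functools import reduce

orders = [{"sku": "A", "qty": 2, "price": 10.0},
          {"sku": "B", "qty": 1, "price": 5.5},
          {"sku": "C", "qty": 0, "price": 99.0}]

# filter: drop empty lines; map: compute line totals; reduce: fold into a sum
nonempty = filter(lambda o: o["qty"] > 0, orders)
line_totals = map(lambda o: o["qty"] * o["price"], nonempty)
revenue = reduce(lambda acc, total: acc + total, line_totals, 0.0)
print(revenue)  # 25.5
```

The same pipeline maps directly onto PySpark (`rdd.filter(...).map(...).reduce(...)`), which is presumably why the posting pairs this requirement with R6.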
Relevant experience of 0 - 10 years working on real-world problems in a reputed company or a proven startup Y3. You are a fanatical implementer who loves to spend time with content, code & workarounds, more than with your loved ones Y4. You are a true believer that human intelligence can be augmented through computer science & mathematics and your survival vinaigrette depends on getting the most from the data Y5. You have an entrepreneurial mindset with ownership, intellectuality, & creativity as your way to work. These are not fancy words; we mean it Actual WE W1. Real startup with meaningful products W2. Revolutionary, not just disruptive W3. Rule creators, not followers W4. Small teams with real brains, not a herd of blockheads W5. Completely trust us and should be trusted back Why Us In addition to the regular stuff which every good startup offers - lots of learning, food, parties, open culture, flexible working hours, and what not - we offer you: <Do your Greatest work of life> You shall be working on our revolutionary products, which are pioneers in their respective categories. This is a fact. We try real hard to hire fun-loving, crazy folks who are driven by more than a paycheck. You shall be working with the creamiest talent on extremely challenging problems at the most happening workplace. How to Apply You should apply online by clicking "Apply Now". For queries regarding an open position, please write to email@example.com For more information, visit http://www.busigence.com Careers: http://careers.busigence.com Research: http://research.busigence.com Jobs: http://careers.busigence.com/jobs/data-science If you feel you are the right fit for the position, mandatorily attach a PDF resume highlighting your A. Key Skills B. Knowledge Inputs C. Major Accomplishments D. Problems Solved E. Submissions - GitHub/StackOverflow/Kaggle/Project Euler etc. (if applicable) If you don't see an open position that interests you, join our Talent Pool and let us know how you can make a difference here.
Referrals are more than welcome. Keep us in the loop.
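R1 and R3 in the posting above ask for functional programming and a map-reduce style in Python. As a rough illustration of the style the posting seems to be describing (the numbers here are made up for the example), a pipeline built from `map`, `filter`, and `functools.reduce` might look like:

```python
from functools import reduce

# Functional-style pipeline: square the even numbers in 1..10, then sum them.
nums = range(1, 11)
evens = filter(lambda n: n % 2 == 0, nums)          # 2, 4, 6, 8, 10
squares = map(lambda n: n * n, evens)               # 4, 16, 36, 64, 100
total = reduce(lambda acc, x: acc + x, squares, 0)  # fold into a single sum
print(total)  # 220
```

The same chain is often written as a generator expression (`sum(n * n for n in nums if n % 2 == 0)`); the explicit map/filter/reduce form mirrors the map-reduce framing of R3.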
Position Description
Demonstrates up-to-date expertise in Software Engineering and applies this to the development, execution, and improvement of action plans. Models compliance with company policies and procedures and supports the company mission, values, and standards of ethics and integrity. Provides and supports the implementation of business solutions. Provides support to the business. Troubleshoots business and production issues and provides on-call support.
Minimum Qualifications
BS/MS in Computer Science or a related field
5+ years' experience building web applications
Solid understanding of computer science principles
Excellent soft skills
Understanding of major algorithms such as searching and sorting
Strong skills in writing clean code using languages like Java and J2EE technologies
Understanding of how to engineer RESTful microservices, and knowledge of major software patterns such as MVC, Singleton, Facade, and Business Delegate
Deep knowledge of web technologies such as HTML5, CSS, and JSON
Good understanding of continuous integration tools and frameworks such as Jenkins
Experience working in Agile environments such as Scrum and Kanban
Experience with performance tuning for very large-scale apps
Experience writing scripts in Perl, Python, and shell
Experience writing jobs using open-source cluster computing frameworks such as Spark
Database design experience - relational (MySQL, Oracle) and NoSQL (Cassandra, MongoDB, Hive, SOLR)
Aptitude for writing clean, succinct, and efficient code
Attitude to thrive in a fun, fast-paced, start-up-like environment
Position Description
Assists in providing guidance to small groups of two to three engineers, including offshore associates, for assigned engineering projects. Demonstrates up-to-date expertise in Software Engineering and applies this to the development, execution, and improvement of action plans. Generates weekly, monthly, and yearly reports using JIRA and open-source tools, and provides updates to leadership teams. Proactively identifies issues and root-causes critical ones. Works with cross-functional teams, sets up KT sessions, and mentors team members. Coordinates with the Sunnyvale and Bentonville teams. Models compliance with company policies and procedures and supports the company mission, values, and standards of ethics and integrity. Provides and supports the implementation of business solutions. Provides support to the business. Troubleshoots business and production issues and provides on-call support.
Minimum Qualifications
BS/MS in Computer Science or a related field
8+ years' experience building web applications
Solid understanding of computer science principles
Excellent soft skills
Understanding of major algorithms such as searching and sorting
Strong skills in writing clean code using languages like Java and J2EE technologies
Understanding of how to engineer RESTful microservices, and knowledge of major software patterns such as MVC, Singleton, Facade, and Business Delegate
Deep knowledge of web technologies such as HTML5, CSS, and JSON
Good understanding of continuous integration tools and frameworks such as Jenkins
Experience working in Agile environments such as Scrum and Kanban
Experience with performance tuning for very large-scale apps
Experience writing scripts in Perl, Python, and shell
Experience writing jobs using open-source cluster computing frameworks such as Spark
Database design experience - relational (MySQL, Oracle) and NoSQL (Cassandra, MongoDB, Hive, SOLR)
Aptitude for writing clean, succinct, and efficient code
Attitude to thrive in a fun, fast-paced, start-up-like environment
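Of the design patterns the posting lists (MVC, Singleton, Facade, Business Delegate), Singleton is compact enough to sketch inline. The posting targets Java/J2EE, but the idea is language-agnostic; a minimal Python version (the `Config` class name and `settings` attribute are illustrative only) could be:

```python
class Config:
    """Singleton: every call to Config() returns the same shared instance."""
    _instance = None

    def __new__(cls):
        if cls._instance is None:
            cls._instance = super().__new__(cls)
            cls._instance.settings = {}  # shared state, created exactly once
        return cls._instance

a = Config()
b = Config()
a.settings["env"] = "prod"
print(a is b, b.settings["env"])  # True prod
```

In Java the same idea is usually expressed with a private constructor and a static `getInstance()` method; interviewers for roles like this often also ask about thread-safe variants.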
Big Data, Business Intelligence, Python, and R skills
Minimum 5+ years of experience as a manager and 10+ years of overall industry experience in a variety of contexts, during which you've built scalable, robust, and fault-tolerant systems. You have solid knowledge of the whole web stack: front end, back end, databases, cache layer, HTTP protocol, TCP/IP, Linux, CPU architecture, etc. You are comfortable jamming on complex architecture and design principles with senior engineers.
Bias for action. You believe that speed and quality aren't mutually exclusive. You've shown good judgement about shipping as fast as possible while still making sure that products are built in a sustainable, responsible way.
Mentorship/Guidance. You know that the most important part of your job is setting the team up for success. Through mentoring, teaching, and reviewing, you help other engineers make sound architectural decisions, improve their code quality, and get out of their comfort zone.
Commitment. You care tremendously about keeping the Uber experience consistent for users and strive to make any issues invisible to riders. You hold yourself personally accountable, jumping in and taking ownership of problems that might not even be in your team's scope.
Hiring know-how. You're a thoughtful interviewer who constantly raises the bar for excellence. You believe that what seems amazing one day becomes the norm the next, and that each new hire should significantly improve the team.
Design and business vision. You help your team understand requirements beyond the written word, and you thrive in an environment where you can uncover subtle details. Even in the absence of a PM or a designer, you show great attention to the design and product aspects of anything your team ships.
Couture.ai is building a patent-pending AI platform targeted towards vertical-specific solutions. The platform is already licensed by Reliance Jio and a few European retailers to power real-time experiences for their combined 200+ million end users. For this role, a credible display of innovation in past projects (or academia) is a must. We are looking for a candidate who lives and talks data & algorithms, loves to play with big-data engineering, and is hands-on with Apache Spark, Kafka, RDBMS/NoSQL databases, big-data analytics, and Unix & production server handling. A Tier-1 college (BE from IITs, BITS Pilani, top NITs, or IIITs, or MS from Stanford, Berkeley, CMU, or UW-Madison) or an exceptionally bright work history is a must. Let us know if this interests you and we can explore the profile further.
Couture.ai is building a patent-pending AI platform targeted towards vertical-specific solutions. The platform is already licensed by Reliance Jio and a few European retailers to power real-time experiences for their combined 200+ million end users. The founding team consists of BITS Pilani alumni with experience creating global startup success stories. The core team we are building consists of some of the best minds in India in artificial intelligence research and data engineering. We are looking to fill multiple roles requiring 2-7 years of research or large-scale production implementation experience with:
- Rock-solid algorithmic capabilities.
- Production deployments of massively large-scale systems, real-time personalization, big-data analytics, and semantic search.
- Or credible research experience innovating new ML algorithms and neural nets.
A GitHub profile link is highly valued. For the right fit into the Couture.ai family, compensation is no bar.
RESPONSIBILITIES:
1. Full ownership of tech, right from driving product decisions to architecture to deployment.
2. Develop cutting-edge user experiences and build cutting-edge technology solutions such as instant messaging on poor networks, live discussions, live videos, and optimal matching.
3. Using billions of data points to build a user personalisation engine.
4. Building a data network effects engine to increase engagement & virality.
5. Scaling the systems to billions of daily hits.
6. Deep diving into performance, power management, memory optimisation, and network connectivity optimisation for the next billion Indians.
7. Orchestrating complicated workflows, asynchronous actions, and higher-order components.
8. Working directly with the Product and Design teams.
REQUIREMENTS:
1. Should have hacked some (computer or non-computer) system to your advantage.
2. Built and managed systems with a scale of 10Mn+ daily hits.
3. Strong architectural experience.
4. Strong experience in memory management, performance tuning, and resource optimisation.
5. PREFERENCE - if you are a woman, an ex-entrepreneur, or hold a CS bachelor's degree from IIT/BITS/NIT.
P.S. If you don't fulfil one of the requirements, you need to be exceptional in the others to be considered.
• Good experience in Python and SQL
• Experience in Hive / Presto is a plus
• Strong skills in using Python / R for building data pipelines and analysis
• Good programming background:
  o Writing efficient and re-usable code
  o Comfort working on the CLI and with tools like GitHub
Other softer aspects that are important:
• Fast learner - no matter how much programming a person has done in the past, willingness to learn new tools is key
• An eye for standardization and scalability of processes - the person will not need to do this alone, but it will help if everyone on the team has this orientation
• A generalist mindset - everyone on the team will also need to work on front-end tools (Tableau and Unidash), so openness to playing a little outside the comfort zone
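As a toy illustration of the "building data pipelines" bullet above (the CSV data and column names are invented for the example; real pipelines for this role would presumably read from Hive/Presto and feed Tableau), a minimal clean-and-aggregate step in plain Python might look like:

```python
import csv
import io

# Hypothetical mini-pipeline: parse CSV text, normalise the grouping key,
# and aggregate a numeric metric per group.
raw = io.StringIO("region,revenue\nNorth,100\nsouth,250\nnorth ,50\n")
totals = {}
for row in csv.DictReader(raw):
    region = row["region"].strip().lower()  # clean the key before grouping
    totals[region] = totals.get(region, 0) + int(row["revenue"])
print(totals)  # {'north': 150, 'south': 250}
```

In practice this shape of work is usually done with pandas (`groupby` + `sum`) or SQL (`GROUP BY`); the point of the sketch is the pattern (ingest, clean, aggregate), which is the same regardless of tool.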