At Entropik Technologies, we build systems that measure and analyze human emotions at an unprecedented scale, with accuracy, speed, and mission-critical availability. We work with some of the leading brands and agencies across the globe, who use our platform to improve overall customer experience and to understand consumer behavior and subconscious responses. The Data Science team at Entropik is a high-profile team that is a center of innovation for the company and a major contributor to the company's core products. The types of challenges we solve have attracted people from industry and academia with diverse backgrounds. We're passionate about maintaining an open and collaborative environment, where team members bring their own unique style of thinking and tools to the table.

Responsibilities:
- Work on challenging fundamental data science problems in affective computing
- Propose and develop solutions independently and work with other data scientists
- Drive the collection of new data and the refinement of existing data sources
- Continuously enhance the current models, with the overall goal of improving accuracy across different emotion touch points
- Prepare white papers, scientific publications, and conference presentations
- Work closely with product and engineering teams to identify and answer important product questions
- Communicate findings to product managers and engineers
- Analyze and interpret results
- Develop best practices for instrumentation and experimentation and communicate them to product engineering teams

Requirements:
- Master's or Ph.D. in a relevant technical field (deep learning, machine learning, computer science, physics, mathematics, statistics, or a related field), or 4+ years' experience in a relevant role
- Extensive experience solving analytical problems with quantitative, machine learning-based approaches
- Experience in computer vision and visual feature extraction
- Experience with deep learning libraries such as TensorFlow and PyTorch, and with architectures such as CNN and R-CNN
- Track record of using advanced statistical methods, information retrieval, and data mining techniques
- Comfort manipulating and analyzing complex, high-volume, high-dimensional data from varied sources
- A strong passion for empirical research and for answering hard questions with data
- A flexible analytic approach that allows for results at varying levels of precision
- Fluency in at least one scripting language such as Python
- Experience with at least some of the following machine learning libraries: scikit-learn, H2O, SparkML, etc.
- Experience with practical data science: source control workflows, deploying machine learning models in production, and real-time machine learning
At Entropik Technologies, we build systems that measure and analyze human emotions at an unprecedented scale, with accuracy, speed, and mission-critical availability. We work with some of the leading brands and agencies across the globe, who use our platform to improve overall customer experience and understand their consumers' behavior. If you are excited about the opportunity to learn and work on affective computing systems, enjoy streamlining and automating routine tasks, and want to work on leading-edge software deployments, come challenge yourself at Entropik Technologies.

Responsibilities
- Design, implement, and support the CI/CD pipeline
- Participate in the design phase of latency-driven, high-scale systems
- Write scripts to monitor systems and automate routine tasks
- Maintain our infrastructure across multiple technologies to ensure "zero" downtime
- Design and develop tooling to assist development teams
- Experiment with new tools and/or processes to improve team routines and communication
- Troubleshoot issues across the entire stack (diagnose software, application, and network problems)
- Document current and future configuration processes and policies
- Take ownership of existing systems, including all tools, technologies, and licenses used by Entropik
- Work with product management to ensure DevOps is aligned with the overall vision of the company and can scale on demand
- Build, support, and maintain all automated test environment build and code deployment scripts using a mixture of the following: Jenkins, Bitbucket, Git, Gradle, OpenShift, Artifactory, cloud virtualized services, JBoss, Tomcat, Chef

Requirements
- Minimum of 3-5 years of experience in software development and DevOps, specifically managing AWS services such as EC2, RDS, ElastiCache, S3, IAM, CloudTrail, and other AWS offerings
- Experience building a multi-region, highly available, auto-scaling infrastructure that optimizes performance and cost
- Plan for future infrastructure as well as maintain and optimize the existing infrastructure
- Conceptualize, architect, and build automated deployment pipelines in a CI/CD environment such as Jenkins
- Conceptualize, architect, and build a containerized infrastructure using Docker, Mesosphere, or similar platforms
- Conceptualize, architect, and build a secured network utilizing VPCs, with input from the security team
- Work with developers and QA to institute a policy of continuous integration with automated testing
- Architect, build, and manage dashboards that provide visibility into delivery and into the functional and performance status of production applications
- Work with developers to institute systems, policies, and workflows that allow for rollback of deployments
- Triage releases of applications to the production environment on a daily basis
- Interface with developers and triage SQL queries that need to be executed in production environments
- Maintain a 24/7 on-call rotation to respond to and support troubleshooting of production issues
- Assist developers and other teams' on-call engineers with post-incident follow-up and review of issues affecting production availability
- Minimum of 2 years' experience with Ansible; must have written playbooks to automate the provisioning of AWS infrastructure as well as routine maintenance tasks
- Prior experience automating deployments to production and lower environments
- Experience with APM tools such as New Relic and with log management tools

Our entire platform is hosted on AWS, comprising web applications, web services, RDS, Redis, and Elasticsearch clusters, plus several other AWS resources such as EC2, S3, CloudFront, Route 53, and SNS.

Essential Functions
- System architecture, process design, and implementation
- Minimum of 2 years scripting experience in Ruby/Python (preferred) and shell
- Web application deployment systems and continuous integration tools (Ansible)
- Establishing and enforcing network security policy (AWS VPCs, security groups) and ACLs
- Establishing and enforcing systems monitoring tools and standards
- Establishing and enforcing risk assessment policies and standards
- Management of big data solutions (Hadoop, Spark) and large messaging infrastructures (Kafka, RabbitMQ)
- Innovative, creative mindset; an out-of-the-box thinker
- Positive, "can do" attitude; not afraid of a challenge
- Excellent English skills