Job Description: We are looking for someone who can work with the platform or analytics vertical to extend and scale our product line. Every product line depends on other products within the LimeTray ecosystem, and the SSE-2 is expected to collaborate with different internal teams to own stable and scalable releases. Every product line has its own tech stack, so the candidate is expected to be comfortable working across different technologies as and when needed. Some of the technologies/frameworks we work on: Microservices, Java, Node, MySQL, MongoDB, Angular, React, Kubernetes, AWS, Python.

Requirements:
- Minimum 3 years' work experience building, managing and maintaining Python-based backend applications
- B.Tech/BE in CS from Tier 1/2 institutes
- Strong fundamentals of data structures and algorithms
- Experience in Python and design patterns
- Expert in Git, unit tests, technical documentation and other development best practices
- Worked with SQL and NoSQL databases (Cassandra, MySQL)
- Understanding of async programming; knowledge of messaging services such as pub/sub or streaming (e.g. Kafka, ActiveMQ, RabbitMQ)
- Understanding of algorithms, data structures and server management
- Understanding of microservice or distributed architectures
- Delivered high-quality work with significant contributions
- Experience handling small teams
- Good debugging skills
- Good analytical and problem-solving skills

What we are looking for:
- Ownership driven - owns end-to-end development
- Team player - works well in a team; collaborates within and outside the team
- Communication - speaks and writes clearly and articulately; maintains this standard in all forms of written communication, including email
- Proactive and persistent - acts without being told to and demonstrates a willingness to go the distance to get something done
- Develops an emotional bond with the product and does what is good for the product
- Customer-first mentality - understands customers' pain and works towards solutions
- Honest and always keeps high standards; expects the same from the team
- Strict on the quality and stability of the product
Job Description: We are looking for someone who can work in the Analytics vertical to extend and scale our product line. Every product line depends on the Analytics vertical within the LimeTray ecosystem, and the SSE-2 is expected to collaborate with internal teams to own stable and scalable releases. Every product line has its own tech stack, so the candidate is expected to be comfortable working across all of them as and when needed. Some of the technologies/frameworks we work on: Microservices, Java, Node, Python, MySQL, MongoDB, Cassandra, Angular, React, Kubernetes, AWS.

Requirements:
- Minimum 4 years' work experience building, managing and maintaining analytics applications
- B.Tech/BE in CS/IT from Tier 1/2 institutes
- Experience handling small teams
- Strong fundamentals of data structures and algorithms
- Good analytical and problem-solving skills
- Strong hands-on experience in Python
- In-depth knowledge of queueing systems (Kafka/ActiveMQ/RabbitMQ)
- Experience building data pipelines and real-time analytics systems
- Experience with SQL (MySQL) and NoSQL (Mongo/Cassandra) databases is a plus
- Understanding of service-oriented architecture
- Delivered high-quality work with significant contributions
- Expert in Git, unit tests, technical documentation and other development best practices

What we are looking for:
- Customer-first mentality; develops an emotional bond with the product and prioritizes accordingly
- Ownership driven - owns end-to-end development
- Proactive and persistent - acts without being told to and demonstrates a willingness to go the distance to get something done
- Strict on the quality and stability of the product
- Communication - speaks and writes clearly and articulately; maintains this standard in all forms of written communication
- Team player
As a DevOps Engineer, you will be responsible for managing and building upon the infrastructure that supports our data intelligence platform. You'll also be involved in building tools and establishing processes to empower developers to deploy and release their code seamlessly. Across teams, we will look to you to make key decisions for our infrastructure, networking and security. You will also own, scale, and maintain the compute and storage infrastructure for various product teams. The ideal DevOps Engineer possesses a solid understanding of system internals and distributed systems, understands what it takes to work in a startup environment, and has the zeal to establish a culture of infrastructure awareness and transparency across teams and products. They fail fast, learn faster, and execute almost instantly.

Technology Stack: Configuration management tools (Ansible/Chef/Puppet), cloud service providers (AWS/DigitalOcean); Docker + Kubernetes ecosystem experience is a plus.

WHY YOU?
* Because you love to take ownership of the infrastructure that allows developers to deploy and manage microservices at scale.
* Because you love tinkering with and building upon new tools and technologies to make your work easier and streamlined with the industry's best practices.
* Because you have the ability to analyze and optimize performance in high-traffic internet applications.
* Because you take pride in building scalable and fault-tolerant infrastructural systems.
* Because you see explaining complex engineering concepts and design decisions to the less tech-savvy as an interesting challenge.

IN MONTH 1, YOU WILL...
* Learn about the products and internal tools that power our data intelligence platform.
* Understand the underlying infrastructure and play around with the tools used to manage it.
* Get familiar with the current architectural challenges arising from handling data and web traffic at scale.

IN MONTH 3, YOU WILL...
* Become an integral part of the architectural decisions taken across teams and products.
* Play a pivotal role in establishing a culture of infrastructure awareness and transparency across the company.
* Become the go-to person for engineers to get help solving issues with performance and scale.

IN MONTH 6 (AND BEYOND), YOU WILL...
* Hire a couple of engineers to strengthen the team and build systems to help manage high-volume and high-velocity data.
* Be involved, along with the DevOps team, in tasks ranging from tracking statistics and managing alerts to deploying new hosts and debugging intricate production issues.

About SocialCops
SocialCops is a data intelligence company that is empowering leaders in organizations globally, including the United Nations and Unilever. Our platform powers over 150 organizations across 28 countries. As a pioneering tech startup, SocialCops was recognized in the list of Technology Pioneers 2018 by the World Economic Forum and by the New York Times in its list of 30 global visionaries. We were also part of the Google Launchpad Accelerator 2018. Aasaan jobs named SocialCops one of the best Indian startups to work for in 2018.

Read more about our work and case studies: https://socialcops.com/case-studies/
Watch our co-founder's TEDx talk on how big data can influence decisions that matter: https://www.youtube.com/watch?v=C6WKt6fJiso
Want to know how much impact you can drive in under a year at SocialCops? See our 2017 year in review: https://socialcops.com/2017/
For more information on our hiring process, check out our blog: https://blog.socialcops.com/inside-sc/team-culture/interested-joining-socialcops-team-heres-need/
Role Brief: 6+ years of demonstrable experience designing technological solutions to complex data problems, and developing and testing modular, reusable, efficient and scalable code to implement those solutions.

Brief about Fractal & Team: Fractal Analytics leads Fortune 500 companies in leveraging big data, analytics, and technology to drive smarter, faster and more accurate decisions in every aspect of their business. Our Big Data capability team is hiring technologists who can produce beautiful and functional code to solve complex analytics problems. If you are an exceptional developer who loves to push the boundaries to solve complex business problems using innovative solutions, then we would like to talk with you.

Job Responsibilities:
- Provide technical leadership in the Big Data space (Hadoop stack such as M/R, HDFS, Pig, Hive, HBase, Flume, Sqoop, etc.; NoSQL stores such as Cassandra and HBase) across Fractal, and contribute to open-source Big Data technologies.
- Visualize and evangelize next-generation infrastructure in the Big Data space (batch, near-real-time and real-time technologies).
- Evaluate and recommend a Big Data technology stack that aligns with the company's technology.
- Show passion for continuous learning, experimenting, applying and contributing towards cutting-edge open-source technologies and software paradigms.
- Drive significant technology initiatives end to end and across multiple layers of architecture.
- Provide strong technical expertise (performance, application design, stack upgrades) to lead platform engineering.
- Define and drive best practices that can be adopted in the Big Data stack, and evangelize them across teams and BUs.
- Drive operational excellence through root cause analysis and continuous improvement for Big Data technologies and processes, and contribute back to the open-source community.
- Provide technical leadership and be a role model to data engineers pursuing a technical career path in engineering.
- Provide and inspire innovations that fuel the growth of Fractal as a whole.

EXPERIENCE:
Must have (ideally, this would include work on the following technologies):
- Expert-level proficiency in at least one of Java, C++ or Python (preferred); Scala knowledge is a strong advantage.
- Strong understanding of and experience with distributed computing frameworks, particularly Apache Hadoop 2.0 (YARN; MR and HDFS) and associated technologies: one or more of Hive, Sqoop, Avro, Flume, Oozie, ZooKeeper, etc.
- Hands-on experience with Apache Spark and its components (Streaming, SQL, MLlib) is a strong advantage.
- Operating knowledge of cloud computing platforms (AWS, especially the EMR, EC2, S3 and SWF services, and the AWS CLI).
- Experience working within a Linux computing environment and using command-line tools, including knowledge of shell/Python scripting for automating common tasks.
- Ability to work in a team in an agile setting; familiarity with JIRA and a clear understanding of how Git works.
- A technologist who loves to code and design.

In addition, the ideal candidate would have great problem-solving skills and the ability and confidence to hack their way out of tight corners.

Relevant experience:
- Java, Python or C++ expertise
- Linux environment and shell scripting
- Distributed computing frameworks (Hadoop or Spark)
- Cloud computing platforms (AWS)

Good to have:
- A statistical or machine learning DSL such as R
- Distributed and low-latency (streaming) application architecture
- Row-store distributed DBMSs such as Cassandra
- Familiarity with API design

Qualification: B.E/B.Tech/M.Tech in Computer Science or a related technical degree, or equivalent.
Job Requirements:
- Installation, configuration and administration of Big Data components (including Hadoop/Spark) for batch and real-time analytics and data hubs
- Capable of processing large sets of structured, semi-structured and unstructured data
- Able to assess business rules, collaborate with stakeholders and perform source-to-target data mapping, design and review
- Familiar with data architecture: data ingestion pipeline design, Hadoop information architecture, data modeling and data mining, machine learning and advanced data processing
- Optional: a visual communicator able to convert and present data in easily comprehensible visualizations using tools such as D3.js and Tableau
- Enjoys being challenged and solving complex problems on a daily basis
- Proficient in executing efficient and robust ETL workflows
- Able to work in teams and collaborate with others to clarify requirements
- Able to tune Hadoop solutions to improve performance and the end-user experience
- Strong coordination and project management skills to handle complex projects
- Engineering background
We are looking for a Machine Learning Developer who has a passion for machine learning technology and big data, and who will work on our next-generation universal IoT platform.

Responsibilities:
• Design and build systems that learn from, predict and analyze data.
• Build and enhance tools to mine data at scale.
• Enable the integration of machine learning models into the Chariot IoT Platform.
• Ensure the scalability of machine learning analytics across millions of networked sensors.
• Work with other engineering teams to integrate our streaming, batch, or ad-hoc analysis algorithms into Chariot IoT's suite of applications.
• Develop generalizable APIs so other engineers can use our work without needing to be machine learning experts.
- Precily AI: Automatic summarization, i.e. shortening a business document or book with our AI to create a summary of the major points of the original document. The AI can produce a coherent summary, taking into account variables such as length, writing style, and syntax. We're also working in the legal domain to reduce the high number of pending cases in India. We use Artificial Intelligence and Machine Learning capabilities such as NLP and neural networks to process data and provide solutions for industries such as Enterprise, Healthcare, and Legal.
Looking for a technically sound, excellent trainer in big data technologies. This is an opportunity to become well known in the industry and gain visibility. Host regular sessions on big data technologies and get paid to learn.
To introduce myself, I head Global Faculty Acquisition for Simplilearn.

About my company: SIMPLILEARN has transformed 500,000+ careers across 150+ countries with 400+ courses, and yes, we are a Registered Professional Education Provider offering PMI-PMP, PRINCE2, ITIL (Foundation, Intermediate & Expert), MSP, COBIT, Six Sigma (GB, BB & Lean Management), Financial Modeling with MS Excel, CSM, PMI-ACP, RMP, CISSP, CTFL, CISA, CFA Level 1, CCNA, CCNP, Big Data Hadoop, CBAP, iOS, TOGAF, Tableau, Digital Marketing, Data Scientist with Python, Data Science with SAS & Excel, Big Data Hadoop Developer & Administrator, Apache Spark and Scala, Tableau Desktop 9, Agile Scrum Master, Salesforce Platform Developer, Azure & Google Cloud. Our official website: www.simplilearn.com

If you're interested in teaching, interacting, sharing real-life experiences and have a passion to transform careers, please join hands with us.

Onboarding process:
• Send your updated CV to my email id, along with copies of relevant certificates.
• Sample e-learning access will be shared, with a 15-day trial after you register on our website.
• Our subject matter expert will evaluate you in your areas of expertise over a telephone conversation (15 to 20 minutes).
• Commercial discussion.
• We will register you for an ongoing online session to introduce you to our course content and the Simplilearn style of teaching.
• A demo will be conducted to check your training style and internet connectivity.
• Freelancer Master Service Agreement.

Payment process:
• Once a workshop, or the last day of training for a batch, is completed, you share your invoice.
• An automated tracking ID will be shared from our automated ticketing system.
• Our faculty group will verify the details provided and forward the invoice to our internal finance team to process your payment; if any additional information is required, we will coordinate with you.
• As per policy, payment will be processed within 15 working days from the date the invoice is received.

Please share your updated CV to proceed to the next step of the onboarding process.
Transporter is an AI-enabled location stack that helps companies improve their commerce, engagement, or operations through their mobile apps, built for the next generation of online commerce.
We are a team with a mission: to create and deliver great learning experiences to engineering students through various workshops and courses. If you are an industry professional and:
- See great scope for improvement in higher technical education across the country and connect with our purpose of impacting it for good;
- Are keen on sharing your technical expertise to enhance the practical learning of students;
- Are innovative in your ways of creating content and delivering it;
- Don't mind earning a few extra bucks while doing this in your free time;
then buzz us at firstname.lastname@example.org and let us discuss how, together, we can take technological education in the country to new heights.